Add OpenAI Responses API vm_run integration example
parent 0aa5e25dc1
commit f7c8a4366b
4 changed files with 263 additions and 0 deletions
@@ -13,6 +13,7 @@ It also ships an MCP server so LLM clients can use the same VM runtime through t
 - Install: [docs/install.md](/home/thales/projects/personal/pyro/docs/install.md)
 - Host requirements: [docs/host-requirements.md](/home/thales/projects/personal/pyro/docs/host-requirements.md)
+- Integration targets: [docs/integrations.md](/home/thales/projects/personal/pyro/docs/integrations.md)
 - Public contract: [docs/public-contract.md](/home/thales/projects/personal/pyro/docs/public-contract.md)
 - Troubleshooting: [docs/troubleshooting.md](/home/thales/projects/personal/pyro/docs/troubleshooting.md)
@@ -142,6 +143,7 @@ pyro demo ollama -v
 - Python one-shot SDK example: [examples/python_run.py](/home/thales/projects/personal/pyro/examples/python_run.py)
 - Python lifecycle example: [examples/python_lifecycle.py](/home/thales/projects/personal/pyro/examples/python_lifecycle.py)
 - MCP client config example: [examples/mcp_client_config.md](/home/thales/projects/personal/pyro/examples/mcp_client_config.md)
+- OpenAI Responses API example: [examples/openai_responses_vm_run.py](/home/thales/projects/personal/pyro/examples/openai_responses_vm_run.py)
 - Agent-ready `vm_run` example: [examples/agent_vm_run.py](/home/thales/projects/personal/pyro/examples/agent_vm_run.py)

 ## Python SDK
docs/integrations.md (new file, 93 lines)
@@ -0,0 +1,93 @@
# Integration Targets

These are the main ways to integrate `pyro-mcp` into an LLM application.

## Recommended Default

Use `vm_run` first.

That keeps the model-facing contract small:

- one tool
- one command
- one ephemeral VM
- automatic cleanup

Only move to lifecycle tools when the agent truly needs VM state across multiple calls.
## OpenAI Responses API

Best when:

- your agent already uses OpenAI models directly
- you want a normal tool-calling loop instead of MCP transport
- you want the smallest amount of integration code

Recommended surface:

- `vm_run`

Canonical example:

- [examples/openai_responses_vm_run.py](/home/thales/projects/personal/pyro/examples/openai_responses_vm_run.py)
## MCP Clients

Best when:

- your host application already supports MCP
- you want `pyro` to run as an external stdio server
- you want tool schemas to be discovered directly from the server

Recommended entrypoint:

- `pyro mcp serve`

Starter config:

- [examples/mcp_client_config.md](/home/thales/projects/personal/pyro/examples/mcp_client_config.md)
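The starter config above is the canonical reference. As a rough sketch only, a stdio server entry in a Claude-Desktop-style MCP client typically looks like the following (the `mcpServers` key layout and the server name `pyro` are conventions of that client family, not something this repo defines):

```json
{
  "mcpServers": {
    "pyro": {
      "command": "pyro",
      "args": ["mcp", "serve"]
    }
  }
}
```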
## Direct Python SDK

Best when:

- your application owns orchestration itself
- you do not need MCP transport
- you want direct access to `Pyro`

Recommended default:

- `Pyro.run_in_vm(...)`

Examples:

- [examples/python_run.py](/home/thales/projects/personal/pyro/examples/python_run.py)
- [examples/python_lifecycle.py](/home/thales/projects/personal/pyro/examples/python_lifecycle.py)
## Agent Framework Wrappers

Examples:

- LangChain tools
- PydanticAI tools
- custom in-house orchestration layers

Best when:

- you already have an application framework that expects a Python callable tool
- you want to wrap `vm_run` behind framework-specific abstractions

Recommended pattern:

- keep the framework wrapper thin
- map framework tool input directly onto `vm_run`
- avoid exposing lifecycle tools unless the framework truly needs them
## Selection Rule

Choose the narrowest integration that matches the host environment:

1. OpenAI Responses API if you want a direct provider tool loop.
2. MCP if your host already speaks MCP.
3. Python SDK if you own orchestration and do not need transport.
4. Framework wrappers only as thin adapters over the same `vm_run` contract.
examples/openai_responses_vm_run.py (new file, 98 lines)
@@ -0,0 +1,98 @@
"""Canonical OpenAI Responses API integration centered on vm_run.

Requirements:
- `pip install openai` or `uv add openai`
- `OPENAI_API_KEY`

This example keeps the model-facing contract intentionally small: one `vm_run`
tool that creates an ephemeral VM, runs one command, and cleans up.
"""

from __future__ import annotations

import json
import os
from typing import Any

from pyro_mcp import Pyro

DEFAULT_MODEL = "gpt-5"

OPENAI_VM_RUN_TOOL: dict[str, Any] = {
    "type": "function",
    "name": "vm_run",
    "description": "Run one command in an ephemeral Firecracker VM and clean it up.",
    "strict": True,
    "parameters": {
        "type": "object",
        "properties": {
            "profile": {"type": "string"},
            "command": {"type": "string"},
            "vcpu_count": {"type": "integer"},
            "mem_mib": {"type": "integer"},
            # Strict mode requires every property to appear in `required`,
            # so optional fields are declared nullable instead of omitted.
            "timeout_seconds": {"type": ["integer", "null"]},
            "ttl_seconds": {"type": ["integer", "null"]},
            "network": {"type": ["boolean", "null"]},
        },
        "required": [
            "profile",
            "command",
            "vcpu_count",
            "mem_mib",
            "timeout_seconds",
            "ttl_seconds",
            "network",
        ],
        "additionalProperties": False,
    },
}


def call_vm_run(arguments: dict[str, Any]) -> dict[str, Any]:
    pyro = Pyro()
    return pyro.run_in_vm(
        profile=str(arguments["profile"]),
        command=str(arguments["command"]),
        vcpu_count=int(arguments["vcpu_count"]),
        mem_mib=int(arguments["mem_mib"]),
        # `or` covers both a missing key and an explicit null from strict mode.
        timeout_seconds=int(arguments.get("timeout_seconds") or 30),
        ttl_seconds=int(arguments.get("ttl_seconds") or 600),
        network=bool(arguments.get("network") or False),
    )
def run_openai_vm_run_example(*, prompt: str, model: str = DEFAULT_MODEL) -> str:
    from openai import OpenAI  # type: ignore[import-not-found]

    client = OpenAI()
    input_items: list[dict[str, Any]] = [{"role": "user", "content": prompt}]

    while True:
        response = client.responses.create(
            model=model,
            input=input_items,
            tools=[OPENAI_VM_RUN_TOOL],
        )
        input_items.extend(response.output)

        tool_calls = [item for item in response.output if item.type == "function_call"]
        if not tool_calls:
            return str(response.output_text)

        for tool_call in tool_calls:
            if tool_call.name != "vm_run":
                raise RuntimeError(f"unexpected tool requested: {tool_call.name}")
            result = call_vm_run(json.loads(tool_call.arguments))
            input_items.append(
                {
                    "type": "function_call_output",
                    "call_id": tool_call.call_id,
                    "output": json.dumps(result, sort_keys=True),
                }
            )


def main() -> None:
    model = os.environ.get("OPENAI_MODEL", DEFAULT_MODEL)
    prompt = (
        "Use the vm_run tool to run `git --version` in an ephemeral VM. "
        "Use the debian-git profile with 1 vCPU and 1024 MiB of memory. "
        "Do not use networking for this request."
    )
    print(run_openai_vm_run_example(prompt=prompt, model=model))


if __name__ == "__main__":
    main()
tests/test_openai_example.py (new file, 70 lines)
@@ -0,0 +1,70 @@
from __future__ import annotations

import importlib.util
import sys
from pathlib import Path
from types import ModuleType, SimpleNamespace
from typing import Any, cast

import pytest


def _load_openai_example_module() -> ModuleType:
    path = Path("examples/openai_responses_vm_run.py")
    spec = importlib.util.spec_from_file_location("openai_responses_vm_run", path)
    if spec is None or spec.loader is None:
        raise AssertionError("failed to load OpenAI example module")
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module


def test_openai_example_tool_targets_vm_run() -> None:
    module = _load_openai_example_module()
    assert module.OPENAI_VM_RUN_TOOL["name"] == "vm_run"
    assert module.OPENAI_VM_RUN_TOOL["type"] == "function"
    assert module.OPENAI_VM_RUN_TOOL["strict"] is True


def test_openai_example_runs_function_call_loop(monkeypatch: pytest.MonkeyPatch) -> None:
    module = _load_openai_example_module()
    tool_call = SimpleNamespace(
        type="function_call",
        name="vm_run",
        call_id="call_123",
        arguments=(
            '{"profile":"debian-git","command":"git --version",'
            '"vcpu_count":1,"mem_mib":1024}'
        ),
    )
    responses = [
        SimpleNamespace(output=[tool_call], output_text=""),
        SimpleNamespace(output=[], output_text="git version 2.40.1"),
    ]
    calls: list[dict[str, Any]] = []

    class FakeResponses:
        def create(self, **kwargs: Any) -> Any:
            calls.append(kwargs)
            return responses.pop(0)

    class FakeOpenAI:
        def __init__(self) -> None:
            self.responses = FakeResponses()

    fake_openai_module = ModuleType("openai")
    cast(Any, fake_openai_module).OpenAI = FakeOpenAI
    monkeypatch.setitem(sys.modules, "openai", fake_openai_module)
    monkeypatch.setattr(
        module,
        "call_vm_run",
        lambda arguments: {"exit_code": 0, "stdout": f"ran {arguments['command']}"},
    )

    result = module.run_openai_vm_run_example(prompt="run git --version")

    assert result == "git version 2.40.1"
    assert calls[0]["tools"][0]["name"] == "vm_run"
    second_input = calls[1]["input"]
    assert second_input[-1]["type"] == "function_call_output"
    assert second_input[-1]["call_id"] == "call_123"