# Integration Targets
These are the main ways to integrate pyro-mcp into an LLM application.
Use this page after you have already validated host and guest execution through the CLI path in install.md or first-run.md.
## Recommended Default
Use `vm_run` first for one-shot commands.
That keeps the model-facing contract small:
- one tool
- one command
- one ephemeral VM
- automatic cleanup
Move to `workspace_*` only when the agent truly needs repeated commands in one workspace across multiple calls.
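
To make that one-shot contract concrete, here is a minimal sketch using the `Pyro.run_in_vm(...)` call named in the Direct Python SDK section below; the import path, argument name, and result shape are assumptions, not the shipped signature:

```python
# One-shot sketch: boots an ephemeral VM, runs the command, returns the
# result, and cleans up automatically -- no lifecycle calls needed.
# Assumed: import path, positional command argument, printable result.
from pyro import Pyro

pyro = Pyro()
result = pyro.run_in_vm("python -c 'print(2 + 2)'")
print(result)
```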
## OpenAI Responses API
Best when:
- your agent already uses OpenAI models directly
- you want a normal tool-calling loop instead of MCP transport
- you want the smallest amount of integration code
Recommended surface:
- `vm_run`
- `workspace_create(seed_path=...)` + `workspace_sync_push` + `workspace_exec` when the agent needs persistent workspace state
- `workspace_diff` + `workspace_export` when the agent needs explicit baseline comparison or host-out file transfer
- `start_service` / `list_services` / `status_service` / `logs_service` / `stop_service` when the agent needs long-running processes inside that workspace
- `open_shell` / `read_shell` / `write_shell` when the agent needs an interactive PTY inside that workspace
Canonical example:
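In lieu of reproducing the shipped example, here is an unofficial sketch of the loop, assuming the official `openai` Python SDK's Responses API and the `Pyro.run_in_vm(...)` call named on this page; the tool schema, model name, and import paths are illustrative:

```python
# Unofficial sketch of a Responses API loop over a vm_run function tool.
# Assumed: `openai` Python SDK, Pyro.run_in_vm(...) as named on this page.
import json

from openai import OpenAI
from pyro import Pyro  # assumed import path

client = OpenAI()
pyro = Pyro()

tools = [{
    "type": "function",
    "name": "vm_run",
    "description": "Run one command in a fresh ephemeral VM; returns output.",
    "parameters": {
        "type": "object",
        "properties": {"command": {"type": "string"}},
        "required": ["command"],
    },
}]

response = client.responses.create(
    model="gpt-4.1",  # illustrative model name
    input="Check which Python version the sandbox provides.",
    tools=tools,
)

# Execute each tool call in an ephemeral VM, then send the outputs back
# so the model can produce its final answer.
outputs = []
for item in response.output:
    if item.type == "function_call" and item.name == "vm_run":
        command = json.loads(item.arguments)["command"]
        outputs.append({
            "type": "function_call_output",
            "call_id": item.call_id,
            "output": str(pyro.run_in_vm(command)),
        })

if outputs:
    final = client.responses.create(
        model="gpt-4.1",
        previous_response_id=response.id,
        input=outputs,
        tools=tools,
    )
    print(final.output_text)
```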
## MCP Clients
Best when:
- your host application already supports MCP
- you want `pyro` to run as an external stdio server
- you want tool schemas to be discovered directly from the server
Recommended entrypoint:
`pyro mcp serve`
Starter config:
- examples/mcp_client_config.md
- examples/claude_desktop_mcp_config.json
- examples/cursor_mcp_config.json
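
For orientation, a Claude Desktop-style stdio entry consistent with those starter configs might look like the following; the server name and command split are assumptions based on `pyro mcp serve`, not a copy of the shipped files:

```json
{
  "mcpServers": {
    "pyro": {
      "command": "pyro",
      "args": ["mcp", "serve"]
    }
  }
}
```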
## Direct Python SDK
Best when:
- your application owns orchestration itself
- you do not need MCP transport
- you want direct access to `Pyro`
Recommended default:
- `Pyro.run_in_vm(...)`
- `Pyro.create_workspace(seed_path=...)` + `Pyro.push_workspace_sync(...)` + `Pyro.exec_workspace(...)` when repeated workspace commands are required
- `Pyro.diff_workspace(...)` + `Pyro.export_workspace(...)` when the agent needs baseline comparison or host-out file transfer
- `Pyro.start_service(...)` + `Pyro.list_services(...)` + `Pyro.logs_service(...)` when the agent needs long-running background processes in one workspace
- `Pyro.open_shell(...)` + `Pyro.write_shell(...)` + `Pyro.read_shell(...)` when the agent needs an interactive PTY inside the workspace
Lifecycle note:
- `Pyro.exec_vm(...)` runs one command and auto-cleans the VM afterward
- use `create_vm(...)` + `start_vm(...)` only when you need pre-exec inspection or status before that final exec
- use `create_workspace(seed_path=...)` when the agent needs repeated commands in one persistent `/workspace` that starts from host content
- use `push_workspace_sync(...)` when later host-side changes need to be imported into that running workspace without recreating it
- use `diff_workspace(...)` when the agent needs a structured comparison against the immutable create-time baseline
- use `export_workspace(...)` when the agent needs one file or directory copied back to the host
- use `start_service(...)` when the agent needs long-running processes and typed readiness inside one workspace
- use `open_shell(...)` when the agent needs interactive shell state instead of one-shot execs
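
A compressed, unofficial sketch of that workspace path using the method names above; argument names beyond `seed_path`, the workspace handle, and all return shapes are assumptions:

```python
# Workspace lifecycle sketch built from the SDK methods named above.
# Assumed: import path, workspace-handle argument, and keyword names
# other than seed_path.
from pyro import Pyro

pyro = Pyro()

# Persistent /workspace seeded from host content at create time.
ws = pyro.create_workspace(seed_path="./my-project")

# Repeated commands share the same workspace state across calls.
pyro.exec_workspace(ws, "pip install -r requirements.txt")
pyro.exec_workspace(ws, "pytest -q")

# Long-running background process; typed readiness probes exist in the
# shipped surface, but the kwargs shown here are assumed.
pyro.start_service(ws, name="web", command="python -m http.server 8000")
print(pyro.list_services(ws))

# Structured diff against the immutable create-time baseline, then
# copy one artifact back to the host.
print(pyro.diff_workspace(ws))
pyro.export_workspace(ws, "report.html")
```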
Examples:
- examples/python_run.py
- examples/python_lifecycle.py
- examples/python_workspace.py
- examples/python_shell.py
## Agent Framework Wrappers
Examples:
- LangChain tools
- PydanticAI tools
- custom in-house orchestration layers
Best when:
- you already have an application framework that expects a Python callable tool
- you want to wrap `vm_run` behind framework-specific abstractions
Recommended pattern:
- keep the framework wrapper thin
- map one-shot framework tool input directly onto `vm_run`
- expose `workspace_*` only when the framework truly needs repeated commands in one workspace
Concrete example:
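As one illustration of that thin-wrapper pattern (not a shipped example), a LangChain tool over the one-shot call could look like this; `Pyro.run_in_vm(...)` is taken from this page, and the import path for `Pyro` is assumed:

```python
# Thin LangChain wrapper over the one-shot contract.
# Assumed: langchain-core installed; Pyro.run_in_vm(...) as named above.
from langchain_core.tools import tool
from pyro import Pyro  # assumed import path

pyro = Pyro()

@tool
def vm_run(command: str) -> str:
    """Run one command in a fresh ephemeral VM and return its output."""
    # The wrapper adds nothing: one tool, one command, automatic cleanup.
    return str(pyro.run_in_vm(command))
```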
## Selection Rule
Choose the narrowest integration that matches the host environment:
- OpenAI Responses API if you want a direct provider tool loop.
- MCP if your host already speaks MCP.
- Python SDK if you own orchestration and do not need transport.
- Framework wrappers only as thin adapters over the same `vm_run` contract.