# Integration Targets
These are the main ways to integrate `pyro-mcp` into an LLM application.
Use this page after you have already validated host and guest execution through the CLI path in `install.md` or `first-run.md`.
## Recommended Default
Start most chat hosts with `workspace-core`. Use `vm_run` only for one-shot
integrations, and promote the chat surface to `workspace-full` only when it
truly needs shells, services, snapshots, secrets, network policy, or disk
tools.
That keeps the model-facing contract small:
- one tool
- one command
- one ephemeral VM
- automatic cleanup
Profile progression:
- `workspace-core`: recommended first profile for persistent chat editing
- `vm-run`: one-shot only
- `workspace-full`: the full stable workspace surface, including shells, services, snapshots, secrets, network policy, and disk tools
## OpenAI Responses API
Best when:
- your agent already uses OpenAI models directly
- you want a normal tool-calling loop instead of MCP transport
- you want the smallest amount of integration code
Recommended surface:
- `vm_run` for one-shot loops
- the `workspace-core` tool set for the normal persistent chat loop
- the `workspace-full` tool set only when the host explicitly needs advanced workspace capabilities
Canonical example:
- examples/openai_responses_vm_run.py
- examples/openai_responses_workspace_core.py
- docs/use-cases/repro-fix-loop.md
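The one-shot loop above comes down to two pieces: a function-tool schema the model can call, and a dispatcher that maps the resulting tool call onto the SDK. A minimal sketch, using the generic JSON-schema tool shape that provider tool-calling APIs accept; the exact request envelope varies by provider and SDK version, and the shipped examples/openai_responses_vm_run.py is the authoritative wiring:

```python
import json

# Hypothetical function-tool schema for the one-shot `vm_run` tool.
VM_RUN_TOOL = {
    "name": "vm_run",
    "description": "Run one command in a fresh ephemeral VM and return its output.",
    "parameters": {
        "type": "object",
        "properties": {"command": {"type": "string"}},
        "required": ["command"],
    },
}

def dispatch_tool_call(name: str, arguments_json: str, run_in_vm) -> str:
    """Map one model tool call onto the SDK; the real call is injected."""
    if name != "vm_run":
        raise ValueError(f"unknown tool: {name}")
    args = json.loads(arguments_json)
    return run_in_vm(args["command"])
```

Injecting `run_in_vm` keeps the loop testable without a live host; in a real agent it would be the SDK's one-shot call.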
## MCP Clients
Best when:
- your host application already supports MCP
- you want `pyro` to run as an external stdio server
- you want tool schemas to be discovered directly from the server
Recommended entrypoint:
```
pyro mcp serve --profile workspace-core
```
Profile progression:
- `pyro mcp serve --profile vm-run` for the smallest one-shot surface
- `pyro mcp serve --profile workspace-core` for the normal persistent chat loop
- `pyro mcp serve --profile workspace-full` only when the model truly needs advanced workspace tools
Starter config:
- examples/mcp_client_config.md
- examples/claude_desktop_mcp_config.json
- examples/cursor_mcp_config.json
- docs/use-cases/README.md
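For stdio-based hosts, a starter config typically nests the serve command under the host's MCP server map. A minimal sketch in the Claude Desktop shape; the shipped examples/claude_desktop_mcp_config.json is the authoritative version:

```json
{
  "mcpServers": {
    "pyro": {
      "command": "pyro",
      "args": ["mcp", "serve", "--profile", "workspace-core"]
    }
  }
}
```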
## Direct Python SDK
Best when:
- your application owns orchestration itself
- you do not need MCP transport
- you want direct access to `Pyro`
Recommended default:
- `Pyro.run_in_vm(...)`
- `Pyro.create_server(profile="workspace-core")` for most chat hosts
- `Pyro.create_workspace(name=..., labels=...)` + `Pyro.list_workspaces()` + `Pyro.update_workspace(...)` when repeated workspaces need human-friendly discovery metadata
- `Pyro.create_workspace(seed_path=...)` + `Pyro.push_workspace_sync(...)` + `Pyro.exec_workspace(...)` when repeated workspace commands are required
- `Pyro.list_workspace_files(...)` / `Pyro.read_workspace_file(...)` / `Pyro.write_workspace_file(...)` / `Pyro.apply_workspace_patch(...)` when the agent needs model-native file inspection and text edits inside one live workspace
- `Pyro.create_workspace(..., secrets=...)` + `Pyro.exec_workspace(..., secret_env=...)` when the workspace needs private tokens or authenticated setup
- `Pyro.create_workspace(..., network_policy="egress+published-ports")` + `Pyro.start_service(..., published_ports=[...])` when the host must probe one workspace service
- `Pyro.diff_workspace(...)` + `Pyro.export_workspace(...)` when the agent needs baseline comparison or host-out file transfer
- `Pyro.start_service(..., secret_env=...)` + `Pyro.list_services(...)` + `Pyro.logs_service(...)` when the agent needs long-running background processes in one workspace
- `Pyro.open_shell(..., secret_env=...)` + `Pyro.write_shell(...)` + `Pyro.read_shell(..., plain=True, wait_for_idle_ms=300)` when the agent needs an interactive PTY inside the workspace
Lifecycle note:
- `Pyro.exec_vm(...)` runs one command and auto-cleans the VM afterward
- use `create_vm(...)` + `start_vm(...)` only when you need pre-exec inspection or status before that final exec
- use `create_workspace(seed_path=...)` when the agent needs repeated commands in one persistent workspace that starts from host content
- use `create_workspace(name=..., labels=...)`, `list_workspaces()`, and `update_workspace(...)` when the agent or operator needs to rediscover the right workspace later without external notes
- use `push_workspace_sync(...)` when later host-side changes need to be imported into that running workspace without recreating it
- use `list_workspace_files(...)`, `read_workspace_file(...)`, `write_workspace_file(...)`, and `apply_workspace_patch(...)` when the agent should inspect or edit workspace files without shell quoting tricks
- use `create_workspace(..., secrets=...)` plus `secret_env` on exec, shell, or service start when the agent needs private tokens or authenticated startup inside that workspace
- use `create_workspace(..., network_policy="egress+published-ports")` plus `start_service(..., published_ports=[...])` when the host must probe one service from that workspace
- use `diff_workspace(...)` when the agent needs a structured comparison against the immutable create-time baseline
- use `export_workspace(...)` when the agent needs one file or directory copied back to the host
- use `stop_workspace(...)` plus `list_workspace_disk(...)`, `read_workspace_disk(...)`, or `export_workspace_disk(...)` when the agent needs offline inspection or one raw ext4 copy from a stopped guest-backed workspace
- use `start_service(...)` when the agent needs long-running processes and typed readiness inside one workspace
- use `open_shell(...)` when the agent needs interactive shell state instead of one-shot execs
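The core persistent-workspace lifecycle above (create once, exec repeatedly, always stop) can be sketched as follows. The method names follow the SDK surface described on this page, but `FakePyro` is a hypothetical stand-in so the sketch runs without a real Pyro host behind it; see examples/python_workspace.py for the shipped version:

```python
# Hypothetical stand-in for the real Pyro SDK object.
class FakePyro:
    def create_workspace(self, seed_path: str) -> str:
        return "ws-1"  # the real SDK returns a workspace handle

    def exec_workspace(self, workspace: str, command: str) -> dict:
        return {"exit_code": 0, "stdout": f"{workspace}: {command}"}

    def stop_workspace(self, workspace: str) -> None:
        pass

def repro_loop(pyro, seed_path: str, commands: list[str]) -> list[str]:
    """Create one seeded workspace, run repeated commands, always stop it."""
    ws = pyro.create_workspace(seed_path=seed_path)
    try:
        return [pyro.exec_workspace(ws, cmd)["stdout"] for cmd in commands]
    finally:
        pyro.stop_workspace(ws)
```

The `try`/`finally` mirrors the contract above: the workspace is stopped even when a command raises.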
Examples:
- examples/python_run.py
- examples/python_lifecycle.py
- examples/python_workspace.py
- examples/python_shell.py
- docs/use-cases/README.md
## Agent Framework Wrappers
Examples:
- LangChain tools
- PydanticAI tools
- custom in-house orchestration layers
Best when:
- you already have an application framework that expects a Python callable tool
- you want to wrap `vm_run` behind framework-specific abstractions
Recommended pattern:
- keep the framework wrapper thin
- map one-shot framework tool input directly onto `vm_run`
- expose `workspace_*` only when the framework truly needs repeated commands in one workspace
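A thin wrapper in this pattern adds validation only and forwards straight to the one-shot contract. A minimal sketch; `run_in_vm` stands in for the real SDK call (e.g. `Pyro.run_in_vm`) and is stubbed here so the sketch runs on its own:

```python
# Hypothetical stub for the real one-shot SDK call.
def run_in_vm(command: str, timeout_s: int = 60) -> str:
    return f"ran: {command}"  # the real call executes in an ephemeral VM

def vm_run_tool(command: str) -> str:
    """Framework-facing tool: no retries, no parsing, no hidden state."""
    if not command.strip():
        raise ValueError("command must be non-empty")
    return run_in_vm(command)
```

A callable with this shape plugs directly into LangChain, PydanticAI, or an in-house tool registry without any additional adapter state.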
Concrete example:
## Selection Rule
Choose the narrowest integration that matches the host environment:
- OpenAI Responses API if you want a direct provider tool loop.
- MCP if your host already speaks MCP.
- Python SDK if you own orchestration and do not need transport.
- Framework wrappers only as thin adapters over the same `vm_run` contract.