# Integration Targets

These are the main ways to integrate `pyro-mcp` into an LLM application. Use this page after you have already validated host and guest execution through the CLI path in [install.md](install.md) or [first-run.md](first-run.md).

## Recommended Default

Use `vm_run` first for one-shot commands. That keeps the model-facing contract small:

- one tool
- one command
- one ephemeral VM
- automatic cleanup

Move to `workspace_*` only when the agent truly needs repeated commands in one workspace across multiple calls.

## OpenAI Responses API

Best when:

- your agent already uses OpenAI models directly
- you want a normal tool-calling loop instead of MCP transport
- you want the smallest amount of integration code

Recommended surface:

- `vm_run`
- `workspace_create(seed_path=...)` + `workspace_sync_push` + `workspace_exec` when the agent needs persistent workspace state
- `open_shell` / `read_shell` / `write_shell` when the agent needs an interactive PTY inside that workspace

Canonical example:

- [examples/openai_responses_vm_run.py](../examples/openai_responses_vm_run.py)

## MCP Clients

Best when:

- your host application already supports MCP
- you want `pyro` to run as an external stdio server
- you want tool schemas to be discovered directly from the server

Recommended entrypoint:

- `pyro mcp serve`

Starter configs:

- [examples/mcp_client_config.md](../examples/mcp_client_config.md)
- [examples/claude_desktop_mcp_config.json](../examples/claude_desktop_mcp_config.json)
- [examples/cursor_mcp_config.json](../examples/cursor_mcp_config.json)

## Direct Python SDK

Best when:

- your application owns orchestration itself
- you do not need MCP transport
- you want direct access to `Pyro`

Recommended default:

- `Pyro.run_in_vm(...)`
- `Pyro.create_workspace(seed_path=...)` + `Pyro.push_workspace_sync(...)` + `Pyro.exec_workspace(...)` when repeated workspace commands are required
- `Pyro.open_shell(...)` + `Pyro.write_shell(...)` + `Pyro.read_shell(...)` when the agent needs an interactive PTY inside the workspace

Lifecycle note:

- `Pyro.exec_vm(...)` runs one command and auto-cleans the VM afterward
- use `create_vm(...)` + `start_vm(...)` only when you need pre-exec inspection or status before that final exec
- use `create_workspace(seed_path=...)` when the agent needs repeated commands in one persistent `/workspace` that starts from host content
- use `push_workspace_sync(...)` when later host-side changes need to be imported into that running workspace without recreating it
- use `open_shell(...)` when the agent needs interactive shell state instead of one-shot execs

Examples:

- [examples/python_run.py](../examples/python_run.py)
- [examples/python_lifecycle.py](../examples/python_lifecycle.py)
- [examples/python_workspace.py](../examples/python_workspace.py)
- [examples/python_shell.py](../examples/python_shell.py)
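For orientation only, here is a minimal sketch of the one-shot SDK path. The import path, constructor, and the argument and return shapes of `run_in_vm` below are assumptions, not the documented API; [examples/python_run.py](../examples/python_run.py) is the authoritative version.

```python
# One-shot sketch: one command, one ephemeral VM, automatic cleanup afterward.
# The import path, constructor, argument shape, and return value are assumptions;
# see examples/python_run.py for the real signatures.
from pyro import Pyro  # assumed import path

client = Pyro()

# Run a single command in a fresh VM; the VM is torn down after the call returns.
result = client.run_in_vm("python -c 'print(40 + 2)'")
print(result)
```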
## Agent Framework Wrappers

Examples:

- LangChain tools
- PydanticAI tools
- custom in-house orchestration layers

Best when:

- you already have an application framework that expects a Python callable tool
- you want to wrap `vm_run` behind framework-specific abstractions

Recommended pattern:

- keep the framework wrapper thin
- map one-shot framework tool input directly onto `vm_run`
- expose `workspace_*` only when the framework truly needs repeated commands in one workspace

Concrete example:

- [examples/langchain_vm_run.py](../examples/langchain_vm_run.py)
- a hedged sketch of this pattern also appears at the end of this page

## Selection Rule

Choose the narrowest integration that matches the host environment:

1. OpenAI Responses API if you want a direct provider tool loop.
2. MCP if your host already speaks MCP.
3. Python SDK if you own orchestration and do not need transport.
4. Framework wrappers only as thin adapters over the same `vm_run` contract.
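To close out the Agent Framework Wrappers pattern above, here is a hedged sketch of a thin LangChain wrapper over the one-shot call. It assumes the same hypothetical `pyro` import and `run_in_vm` signature as the SDK sketch earlier on this page; [examples/langchain_vm_run.py](../examples/langchain_vm_run.py) remains the canonical example.

```python
# Thin framework wrapper sketch: map one-shot tool input directly onto vm_run.
# The pyro import path and call signature are assumptions; only the LangChain
# `tool` decorator usage is standard.
from langchain_core.tools import tool
from pyro import Pyro  # assumed import path

_client = Pyro()


@tool
def vm_run(command: str) -> str:
    """Run one shell command in a fresh ephemeral VM and return its output."""
    # Keep the wrapper thin: no retries, no workspace state, no extra options.
    return str(_client.run_in_vm(command))
```

Bind the resulting tool to the agent like any other LangChain tool, and keep `workspace_*` out of the wrapper unless the framework truly needs persistent workspace state.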