# Integration Targets

These are the main ways to integrate `pyro-mcp` into an LLM application.

Use this page after you have already validated host and guest execution through the
CLI path in [install.md](install.md) or [first-run.md](first-run.md).
## Recommended Default

Use `vm_run` first.

That keeps the model-facing contract small:

- one tool
- one command
- one ephemeral VM
- automatic cleanup

Only move to lifecycle tools when the agent truly needs VM state across multiple calls.
## OpenAI Responses API

Best when:

- your agent already uses OpenAI models directly
- you want a normal tool-calling loop instead of MCP transport
- you want the smallest amount of integration code

Recommended surface:

- `vm_run`

Canonical example:

- [examples/openai_responses_vm_run.py](../examples/openai_responses_vm_run.py)
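As a sketch, registering `vm_run` as a Responses API function tool looks like the following. The parameter schema here (a single `command` string) is an assumption for illustration, not the real `vm_run` schema; the canonical version lives in the linked example.

```python
# Hypothetical function-tool definition for vm_run in a Responses API
# tool loop. The 'command' parameter is illustrative only.
VM_RUN_TOOL = {
    "type": "function",
    "name": "vm_run",
    "description": "Run one command in a fresh ephemeral VM and return its output.",
    "parameters": {
        "type": "object",
        "properties": {
            "command": {
                "type": "string",
                "description": "Command to run inside the VM.",
            },
        },
        "required": ["command"],
    },
}

# In the loop: pass tools=[VM_RUN_TOOL] to client.responses.create(...),
# execute vm_run when the model emits a function call, and feed the
# tool output back as the next input item.
```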
## MCP Clients

Best when:

- your host application already supports MCP
- you want `pyro` to run as an external stdio server
- you want tool schemas to be discovered directly from the server

Recommended entrypoint:

- `pyro mcp serve`

Starter configs:

- [examples/mcp_client_config.md](../examples/mcp_client_config.md)
- [examples/claude_desktop_mcp_config.json](../examples/claude_desktop_mcp_config.json)
- [examples/cursor_mcp_config.json](../examples/cursor_mcp_config.json)
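A minimal config fragment for an MCP client that launches stdio servers might look like this. The `mcpServers` key shape follows the common Claude Desktop / Cursor convention; treat the linked example configs as canonical.

```json
{
  "mcpServers": {
    "pyro": {
      "command": "pyro",
      "args": ["mcp", "serve"]
    }
  }
}
```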
## Direct Python SDK

Best when:

- your application owns orchestration itself
- you do not need MCP transport
- you want direct access to `Pyro`

Recommended default:

- `Pyro.run_in_vm(...)`

Lifecycle note:

- `Pyro.exec_vm(...)` runs one command and auto-cleans the VM afterward
- use `create_vm(...)` + `start_vm(...)` only when you need pre-exec inspection or status before that final exec

Examples:

- [examples/python_run.py](../examples/python_run.py)
- [examples/python_lifecycle.py](../examples/python_lifecycle.py)
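The two lifecycle shapes above can be sketched side by side. The method names (`run_in_vm`, `exec_vm`, `create_vm`, `start_vm`) come from this page, but the exact signatures are assumptions, so a minimal stub stands in for the real `Pyro` client here; see the linked examples for the real API.

```python
class StubPyro:
    """Illustration-only stand-in that records calls; not the real Pyro client."""

    def __init__(self):
        self.calls = []

    def run_in_vm(self, command):
        self.calls.append(("run_in_vm", command))
        return "ok"

    def exec_vm(self, vm_id, command):
        # exec_vm runs one command and auto-cleans the VM afterward.
        self.calls.append(("exec_vm", vm_id, command))
        return "ok"

    def create_vm(self):
        self.calls.append(("create_vm",))
        return "vm-1"

    def start_vm(self, vm_id):
        self.calls.append(("start_vm", vm_id))


def one_shot(pyro, command):
    # Recommended default: one call, one ephemeral VM, automatic cleanup.
    return pyro.run_in_vm(command)


def with_pre_exec_inspection(pyro, command, inspect):
    # Only when you need status/inspection before the final exec.
    vm_id = pyro.create_vm()
    pyro.start_vm(vm_id)
    inspect(vm_id)  # e.g. check VM status before running
    return pyro.exec_vm(vm_id, command)  # final exec; VM auto-cleans after
```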
## Agent Framework Wrappers

Examples:

- LangChain tools
- PydanticAI tools
- custom in-house orchestration layers

Best when:

- you already have an application framework that expects a Python callable tool
- you want to wrap `vm_run` behind framework-specific abstractions

Recommended pattern:

- keep the framework wrapper thin
- map framework tool input directly onto `vm_run`
- avoid exposing lifecycle tools unless the framework truly needs them

Concrete example:

- [examples/langchain_vm_run.py](../examples/langchain_vm_run.py)
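The thin-wrapper pattern above amounts to a pass-through callable. In this sketch, `vm_run` is injected as a plain callable and the `command` keyword is illustrative, not the real schema; the linked LangChain example is canonical.

```python
def make_vm_run_tool(vm_run):
    """Wrap vm_run as a plain callable that a framework can register as a tool."""

    def run_command(command: str) -> str:
        # Direct pass-through: no lifecycle tools, no extra state in the wrapper.
        return vm_run(command=command)

    return run_command
```

Whatever the framework's tool abstraction is, it should only ever see `run_command`.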
## Selection Rule

Choose the narrowest integration that matches the host environment:

1. OpenAI Responses API if you want a direct provider tool loop.
2. MCP if your host already speaks MCP.
3. Python SDK if you own orchestration and do not need transport.
4. Framework wrappers only as thin adapters over the same `vm_run` contract.