Integration Targets
These are the main ways to integrate pyro-mcp into an LLM application.
Use this page after you have already validated the host and guest execution through the CLI path in install.md or first-run.md.
Recommended Default
Use `vm_run` first for one-shot commands.
That keeps the model-facing contract small:
- one tool
- one command
- one ephemeral VM
- automatic cleanup
Move to `task_*` only when the agent truly needs repeated commands in one workspace across multiple calls.
OpenAI Responses API
Best when:
- your agent already uses OpenAI models directly
- you want a normal tool-calling loop instead of MCP transport
- you want the smallest amount of integration code
Recommended surface:
- `vm_run` for one-shot commands
- `task_create(source_path=...)` + `task_exec` when the agent needs persistent workspace state
Canonical example:
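As a minimal sketch of how the `vm_run` surface could be declared as a function tool in a Responses API tool loop (the parameter shape below is an assumption for illustration, not the schema pyro-mcp actually publishes):

```python
# Hypothetical function-tool schema for a Responses API tool loop.
# Only the tool name "vm_run" comes from this page; the parameter
# shape is an assumption -- check the published schema before use.
VM_RUN_TOOL = {
    "type": "function",
    "name": "vm_run",
    "description": "Run one shell command in a fresh ephemeral VM and return its output.",
    "parameters": {
        "type": "object",
        "properties": {
            "command": {"type": "string", "description": "Shell command to execute"}
        },
        "required": ["command"],
        "additionalProperties": False,
    },
}
```

A schema like this would be passed via `tools=[VM_RUN_TOOL]` to `client.responses.create(...)`, with your loop executing each tool call and feeding the output back to the model.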
MCP Clients
Best when:
- your host application already supports MCP
- you want `pyro` to run as an external stdio server
- you want tool schemas to be discovered directly from the server
Recommended entrypoint:
`pyro mcp serve`
Starter config:
- examples/mcp_client_config.md
- examples/claude_desktop_mcp_config.json
- examples/cursor_mcp_config.json
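As an illustration of the shape these starter configs take (the files in `examples/` are authoritative; this fragment is a sketch), a stdio MCP client entry typically looks like:

```json
{
  "mcpServers": {
    "pyro": {
      "command": "pyro",
      "args": ["mcp", "serve"]
    }
  }
}
```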
Direct Python SDK
Best when:
- your application owns orchestration itself
- you do not need MCP transport
- you want direct access to `Pyro`
Recommended default:
- `Pyro.run_in_vm(...)` for one-shot commands
- `Pyro.create_task(source_path=...)` + `Pyro.exec_task(...)` when repeated workspace commands are required
Lifecycle note:
- `Pyro.exec_vm(...)` runs one command and auto-cleans the VM afterward
- use `create_vm(...)` + `start_vm(...)` only when you need pre-exec inspection or status before that final exec
- use `create_task(source_path=...)` when the agent needs repeated commands in one persistent workspace that starts from host content
Examples:
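The task lifecycle above can be sketched as follows. Keyword names follow this page (`create_task(source_path=...)`, `exec_task`); the exact signatures, the `delete_task` cleanup call, and the stub client are assumptions made so the sketch is self-contained:

```python
import shlex

def run_seeded_task(pyro, source_path, argv):
    """Seed a task from host content, run one command, always clean up."""
    task = pyro.create_task(source_path=source_path)  # assumed keyword form
    try:
        # shlex.join keeps argument boundaries intact for sh -lc style commands
        return pyro.exec_task(task, command=shlex.join(argv))
    finally:
        pyro.delete_task(task)  # assumed cleanup method

# Stub client standing in for the real SDK, so the flow is runnable here
class _StubPyro:
    def __init__(self):
        self.calls = []
    def create_task(self, source_path):
        self.calls.append(("create", source_path))
        return "task-1"
    def exec_task(self, task, command):
        self.calls.append(("exec", task, command))
        return "ok"
    def delete_task(self, task):
        self.calls.append(("delete", task))

stub = _StubPyro()
result = run_seeded_task(stub, "./workspace", ["cat", "note.txt"])
```

The `try`/`finally` mirrors the lifecycle note: the task is deleted even if the exec fails, which matches the "repeated commands, explicit lifetime" contract of `task_*`.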
Agent Framework Wrappers
Examples:
- LangChain tools
- PydanticAI tools
- custom in-house orchestration layers
Best when:
- you already have an application framework that expects a Python callable tool
- you want to wrap `vm_run` behind framework-specific abstractions
Recommended pattern:
- keep the framework wrapper thin
- map one-shot framework tool input directly onto `vm_run`
- expose `task_*` only when the framework truly needs repeated commands in one workspace
Concrete example:
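One way to sketch the thin-wrapper pattern: any framework that accepts a plain Python callable can expose `vm_run` like this. The backend call is injected, so the wrapper stays framework-agnostic; the function name and signature here are illustrative, not part of the SDK:

```python
from typing import Callable

def make_vm_run_tool(run_in_vm: Callable[[str], str]) -> Callable[[str], str]:
    """Wrap an injected one-shot runner as a framework-friendly callable."""
    def vm_run(command: str) -> str:
        """Run one shell command in a fresh ephemeral VM and return its output."""
        if not command.strip():
            raise ValueError("command must be non-empty")
        return run_in_vm(command)
    return vm_run

# Usage with a stub backend standing in for the real one-shot call
tool = make_vm_run_tool(lambda cmd: f"ran: {cmd}")
```

Because the wrapper is just a closure over one callable, registering it as a LangChain tool, a PydanticAI tool, or an in-house tool is a one-line adapter on top.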
Selection Rule
Choose the narrowest integration that matches the host environment:
- OpenAI Responses API if you want a direct provider tool loop.
- MCP if your host already speaks MCP.
- Python SDK if you own orchestration and do not need transport.
- Framework wrappers only as thin adapters over the same `vm_run` contract.