Integration Targets

These are the main ways to integrate pyro-mcp into an LLM application.

Use this page after you have validated host and guest execution through the CLI path in install.md or first-run.md.

Use vm_run first for one-shot commands.

That keeps the model-facing contract small:

  • one tool
  • one command
  • one ephemeral VM
  • automatic cleanup

Move to task_* only when the agent truly needs repeated commands in one workspace across multiple calls.

OpenAI Responses API

Best when:

  • your agent already uses OpenAI models directly
  • you want a normal tool-calling loop instead of MCP transport
  • you want the smallest amount of integration code

Recommended surface:

  • vm_run
  • task_create(source_path=...) + task_exec when the agent needs persistent workspace state

Canonical example:
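A minimal sketch of the tool loop, assuming the OpenAI Python SDK's Responses API and the SDK surface described under Direct Python SDK below. The `pyro` import path, the model name, the tool schema wording, and the string rendering of the vm_run result are illustrative assumptions, not part of pyro's contract.

```python
import json

from openai import OpenAI
from pyro import Pyro  # assumed import path for the pyro SDK

client = OpenAI()
pyro = Pyro()

# One function tool that maps 1:1 onto vm_run: one command, one ephemeral VM.
VM_RUN_TOOL = {
    "type": "function",
    "name": "vm_run",
    "description": "Run one shell command in a fresh ephemeral VM and return its output.",
    "parameters": {
        "type": "object",
        "properties": {"command": {"type": "string"}},
        "required": ["command"],
    },
}

input_items = [{"role": "user", "content": "Run `uname -a` and summarize the kernel."}]
while True:
    response = client.responses.create(
        model="gpt-4.1",  # placeholder model name
        input=input_items,
        tools=[VM_RUN_TOOL],
    )
    calls = [item for item in response.output if item.type == "function_call"]
    if not calls:
        print(response.output_text)
        break
    input_items += response.output  # keep the model's tool-call items in context
    for call in calls:
        args = json.loads(call.arguments)
        result = pyro.run_in_vm(args["command"])  # VM is cleaned up automatically
        input_items.append(
            {
                "type": "function_call_output",
                "call_id": call.call_id,
                "output": str(result),  # assumed: result renders usefully as text
            }
        )
```

The loop shape is the whole integration: one function tool, executed through the one-shot SDK call, with no VM lifecycle leaking into the agent code.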

MCP Clients

Best when:

  • your host application already supports MCP
  • you want pyro to run as an external stdio server
  • you want tool schemas to be discovered directly from the server

Recommended entrypoint:

  • pyro mcp serve

Starter config:
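A starter config following the mcpServers convention used by Claude Desktop and similar MCP hosts; the surrounding key layout and file location vary by client, so treat it as a template rather than pyro's documented config.

```json
{
  "mcpServers": {
    "pyro": {
      "command": "pyro",
      "args": ["mcp", "serve"]
    }
  }
}
```

The host spawns pyro mcp serve as a stdio child process and discovers the tool schemas (vm_run, task_*) directly from the server.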

Direct Python SDK

Best when:

  • your application owns orchestration itself
  • you do not need MCP transport
  • you want direct access to Pyro

Recommended default:

  • Pyro.run_in_vm(...)
  • Pyro.create_task(source_path=...) + Pyro.exec_task(...) when repeated workspace commands are required

Lifecycle note:

  • Pyro.exec_vm(...) runs one command and auto-cleans the VM afterward
  • use create_vm(...) + start_vm(...) only when you need pre-exec inspection or status before that final exec
  • use create_task(source_path=...) when the agent needs repeated commands in one persistent /workspace that starts from host content

Examples:
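A sketch of both recommended paths, assuming `pyro` as the import path. run_in_vm, create_task, and exec_task are named on this page; the task handle, exec_task's argument shape, and delete_task are assumptions that mirror the CLI's task create / task exec / task delete flow.

```python
from pyro import Pyro  # assumed import path

pyro = Pyro()

# Default path: one command, one ephemeral VM, automatic cleanup.
print(pyro.run_in_vm("python -V"))

# Persistent path: seed /workspace from host content, then run repeated
# commands against the same state.
task = pyro.create_task(source_path="./project")  # seeds /workspace
print(pyro.exec_task(task, "cat note.txt"))       # assumed call shape
print(pyro.exec_task(task, "ls /workspace"))
pyro.delete_task(task)                            # assumed name, mirrors `task delete`
```

The explicit create_vm(...) + start_vm(...) path from the lifecycle note is deliberately omitted here; reach for it only when you need inspection or status before the final exec.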

Agent Framework Wrappers

Examples:

  • LangChain tools
  • PydanticAI tools
  • custom in-house orchestration layers

Best when:

  • you already have an application framework that expects a Python callable tool
  • you want to wrap vm_run behind framework-specific abstractions

Recommended pattern:

  • keep the framework wrapper thin
  • map one-shot framework tool input directly onto vm_run
  • expose task_* only when the framework truly needs repeated commands in one workspace

Concrete example:
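A thin LangChain wrapper as one instance of the pattern; the @tool decorator comes from langchain_core.tools, and the pyro call follows the SDK section above.

```python
from langchain_core.tools import tool

from pyro import Pyro  # assumed import path

pyro = Pyro()


@tool
def vm_run(command: str) -> str:
    """Run one shell command in a fresh ephemeral VM and return its output."""
    # Thin mapping: framework tool input goes straight to the one-shot call,
    # so the VM lifecycle never leaks into framework code.
    return str(pyro.run_in_vm(command))
```

Swapping to PydanticAI or an in-house layer should change only the decorator and registration, never the body.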

Selection Rule

Choose the narrowest integration that matches the host environment:

  1. OpenAI Responses API if you want a direct provider tool loop.
  2. MCP if your host already speaks MCP.
  3. Python SDK if you own orchestration and do not need transport.
  4. Framework wrappers only as thin adapters over the same vm_run contract.