
Integration Targets

These are the main ways to integrate pyro-mcp into an LLM application.

Use this page after you have validated host and guest execution through the CLI path in install.md or first-run.md.

Use vm_run first for one-shot commands.

That keeps the model-facing contract small:

  • one tool
  • one command
  • one ephemeral VM
  • automatic cleanup

Move to workspace_* only when the agent truly needs repeated commands in one workspace across multiple calls.

OpenAI Responses API

Best when:

  • your agent already uses OpenAI models directly
  • you want a normal tool-calling loop instead of MCP transport
  • you want the smallest amount of integration code

Recommended surface:

  • vm_run
  • workspace_create(seed_path=...) + workspace_sync_push + workspace_exec when the agent needs persistent workspace state
  • open_shell / read_shell / write_shell when the agent needs an interactive PTY inside that workspace

Canonical example:
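
A minimal tool-calling loop sketch. The request/response shapes follow the standard OpenAI Responses function-calling flow; the `pyro` import path, the `Pyro()` constructor, the model name, the `command` parameter schema, and the `run_in_vm` call shape are assumptions based on the surface described on this page.

```python
import json

from openai import OpenAI
from pyro import Pyro  # assumed import path for the Pyro SDK

client = OpenAI()
pyro = Pyro()  # assumed constructor

# One flat function tool: the whole model-facing contract is "run one command".
tools = [
    {
        "type": "function",
        "name": "vm_run",
        "description": "Run a single shell command in a fresh ephemeral VM and return its output.",
        "parameters": {
            "type": "object",
            "properties": {"command": {"type": "string", "description": "Shell command to run."}},
            "required": ["command"],
        },
    }
]

response = client.responses.create(
    model="gpt-4.1",  # any Responses-capable model
    input="Check which Python version is available in the sandbox.",
    tools=tools,
)

# Resolve tool calls until the model stops asking for them.
while any(item.type == "function_call" for item in response.output):
    outputs = []
    for item in response.output:
        if item.type != "function_call":
            continue
        args = json.loads(item.arguments)
        result = pyro.run_in_vm(args["command"])  # assumed signature: command string in, output out
        outputs.append(
            {
                "type": "function_call_output",
                "call_id": item.call_id,
                "output": str(result),
            }
        )
    response = client.responses.create(
        model="gpt-4.1",
        previous_response_id=response.id,
        input=outputs,
        tools=tools,
    )

print(response.output_text)
```

The loop never creates or tears down a VM itself; every `vm_run` call maps onto one ephemeral VM with automatic cleanup, which keeps the integration code to a single tool definition and a single dispatch branch.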

MCP Clients

Best when:

  • your host application already supports MCP
  • you want pyro to run as an external stdio server
  • you want tool schemas to be discovered directly from the server

Recommended entrypoint:

  • pyro mcp serve

Starter config:
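
Most hosts only need a config entry that launches `pyro mcp serve` as an external stdio process; the exact keys are host-specific, so follow your host's documentation. To sanity-check the server and inspect the tool schemas it advertises, here is a minimal sketch using the official `mcp` Python SDK. The `vm_run` argument shape is an assumption; the schema returned by `list_tools` is authoritative.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Launch the pyro MCP server the same way a host would: as an external stdio process.
server = StdioServerParameters(command="pyro", args=["mcp", "serve"])


async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Tool schemas are discovered from the server, not hard-coded in the host.
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

            # Assumed argument shape for vm_run; check the discovered schema above.
            result = await session.call_tool("vm_run", arguments={"command": "uname -a"})
            print(result.content)


asyncio.run(main())
```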

Direct Python SDK

Best when:

  • your application owns orchestration itself
  • you do not need MCP transport
  • you want direct access to Pyro

Recommended default:

  • Pyro.run_in_vm(...)
  • Pyro.create_workspace(seed_path=...) + Pyro.push_workspace_sync(...) + Pyro.exec_workspace(...) when repeated workspace commands are required
  • Pyro.open_shell(...) + Pyro.write_shell(...) + Pyro.read_shell(...) when the agent needs an interactive PTY inside the workspace

Lifecycle note:

  • Pyro.exec_vm(...) runs one command and auto-cleans the VM afterward
  • use create_vm(...) + start_vm(...) only when you need pre-exec inspection or status before that final exec
  • use create_workspace(seed_path=...) when the agent needs repeated commands in one persistent /workspace that starts from host content
  • use push_workspace_sync(...) when later host-side changes need to be imported into that running workspace without recreating it
  • use open_shell(...) when the agent needs interactive shell state instead of one-shot execs

Examples:
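
A minimal sketch covering all three tiers. The method names and `seed_path=` come from this page; the import path, the constructor, and the handle/argument shapes of the workspace and shell calls are assumptions.

```python
from pyro import Pyro  # assumed import path for the Pyro SDK

pyro = Pyro()  # assumed constructor

# One-shot: one command, one ephemeral VM, automatic cleanup.
result = pyro.run_in_vm("python -m pytest -q")
print(result)

# Persistent workspace: repeated commands against one /workspace seeded from host content.
ws = pyro.create_workspace(seed_path="./my-project")
pyro.exec_workspace(ws, "pip install -e .")   # assumed call shape: workspace handle + command
pyro.exec_workspace(ws, "pytest -q")

# Later host-side edits can be imported without recreating the workspace.
pyro.push_workspace_sync(ws)

# Interactive PTY when the agent needs shell state instead of one-shot execs.
shell = pyro.open_shell(ws)
pyro.write_shell(shell, "export DEBUG=1\n")   # assumed call shape: shell handle + input text
pyro.write_shell(shell, "python -m pytest -q\n")
print(pyro.read_shell(shell))
```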

Agent Framework Wrappers

Examples:

  • LangChain tools
  • PydanticAI tools
  • custom in-house orchestration layers

Best when:

  • you already have an application framework that expects a Python callable tool
  • you want to wrap vm_run behind framework-specific abstractions

Recommended pattern:

  • keep the framework wrapper thin
  • map one-shot framework tool input directly onto vm_run
  • expose workspace_* only when the framework truly needs repeated commands in one workspace

Concrete example:
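
A minimal LangChain wrapper sketch. The `@tool` decorator comes from `langchain_core.tools`; the `pyro` import path and the `run_in_vm` call shape are assumptions, as above.

```python
from langchain_core.tools import tool

from pyro import Pyro  # assumed import path for the Pyro SDK

pyro = Pyro()  # assumed constructor


@tool
def vm_run(command: str) -> str:
    """Run a single shell command in a fresh ephemeral VM and return its output."""
    # Keep the wrapper thin: pass the framework's one-shot input straight through.
    result = pyro.run_in_vm(command)  # assumed signature: command string in, output out
    return str(result)


# Register vm_run with the agent like any other LangChain tool, e.g. tools=[vm_run].
```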

Selection Rule

Choose the narrowest integration that matches the host environment:

  1. OpenAI Responses API if you want a direct provider tool loop.
  2. MCP if your host already speaks MCP.
  3. Python SDK if you own orchestration and do not need transport.
  4. Framework wrappers only as thin adapters over the same vm_run contract.