Expose stable MCP/server tool profiles so chat hosts can start narrow and widen only when needed. This adds vm-run, workspace-core, and workspace-full across the CLI serve path, Pyro.create_server(), and the package-level create_server() factory while keeping workspace-full as the default. Register profile-specific tool sets from one shared contract mapping, and narrow the workspace-core schemas so secrets, network policy, shells, services, snapshots, and disk tools do not leak into the default persistent chat profile. The full surface remains available unchanged under workspace-full. Refresh the public docs and examples around the profile progression, add a canonical OpenAI Responses workspace-core example, mark the 3.4.0 roadmap milestone done, and verify with uv lock, UV_CACHE_DIR=.uv-cache make check, UV_CACHE_DIR=.uv-cache make dist-check, and a real guest-backed workspace-core smoke for create, file write, exec, diff, export, reset, and delete.
3.4.0 Tool Profiles And Canonical Chat Flows
Status: Done
Goal
Make the model-facing surface intentionally small for chat hosts, while keeping the full workspace product available when needed.
Public API Changes
Planned additions:
- pyro mcp serve --profile {vm-run,workspace-core,workspace-full}
- matching Python SDK and server factory configuration for the same profiles
- one canonical OpenAI Responses example that uses the workspace-core profile
- one canonical MCP/chat example that uses the same profile progression
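A hedged sketch of the planned call shape, assuming the factory takes a profile keyword and defaults to workspace-full as the summary states; the function body and names here are illustrative, not the shipped API.

```python
# Illustrative sketch only: the real factory lives in the package and its
# signature may differ. The profile names and the workspace-full default
# come from this milestone's summary; everything else is assumed.
VALID_PROFILES = ("vm-run", "workspace-core", "workspace-full")

def create_server(profile: str = "workspace-full") -> dict:
    """Validate the profile early so a typo fails at startup, not at
    tool-call time, then return a stand-in for the server object."""
    if profile not in VALID_PROFILES:
        raise ValueError(f"unknown profile {profile!r}; expected one of {VALID_PROFILES}")
    return {"profile": profile}

# A chat host starting narrow would pass the core profile explicitly:
server = create_server(profile="workspace-core")
```

Validating the profile name at construction time keeps the stable-names guarantee cheap to enforce: an unrecognized profile is an immediate error rather than a silently empty tool set.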
Representative profile intent:
- vm-run: one-shot only
- workspace-core: create, status, exec, file ops, diff, reset, export, delete
- workspace-full: shells, services, snapshots, secrets, network policy, and the rest of the stable workspace surface
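The intent above can be captured in one shared contract mapping from which each profile's tool set is registered. The tool names below are assumptions for illustration, not the package's real identifiers; only the profile names and the core/full split come from this document.

```python
# Hypothetical shared contract mapping; tool identifiers are illustrative.
PROFILE_TOOLS: dict[str, frozenset[str]] = {
    "vm-run": frozenset({"vm_run"}),
    "workspace-core": frozenset({
        "workspace_create", "workspace_status", "workspace_exec",
        "workspace_file_read", "workspace_file_write", "workspace_diff",
        "workspace_reset", "workspace_export", "workspace_delete",
    }),
}
# workspace-full is strictly additive: the core set plus the advanced
# surface (shells, services, snapshots, secrets, network policy, disk).
PROFILE_TOOLS["workspace-full"] = PROFILE_TOOLS["workspace-core"] | frozenset({
    "workspace_shell", "workspace_service", "workspace_snapshot",
    "workspace_secret", "workspace_network_policy", "workspace_disk",
})
```

Deriving workspace-full from workspace-core (rather than listing it twice) makes the "one shared contract mapping" invariant structural: the core profile can never drift out of the full one.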
Implementation Boundaries
- keep the current full surface available for advanced users
- add profiles as an exposure control, not as a second product line
- make profile behavior explicit in docs and help text
- keep profile names stable once shipped
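The "exposure control, not a second product line" boundary can be sketched as a single handler registry filtered at registration time; register_tools and the callback shape are assumptions, not the package's internals.

```python
from typing import Callable

# Sketch of exposure control: one shared registry of handlers, where the
# profile decides only what gets registered -- never a second handler set.
def register_tools(
    register: Callable[[str, Callable], None],
    all_tools: dict[str, Callable],
    allowed: frozenset[str],
) -> list[str]:
    """Register only the allowed subset of the full registry; return the
    names actually exposed, in deterministic order."""
    exposed = []
    for name, handler in sorted(all_tools.items()):
        if name in allowed:
            register(name, handler)
            exposed.append(name)
    return exposed
```

Because every profile draws from the same all_tools dict, advanced users still get the unchanged full surface, and a narrow profile differs only in what is advertised.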
Non-Goals
- no framework-specific wrappers inside the core package
- no server-side planner that chooses tools on the model's behalf
- no hidden feature gating by provider or client
Acceptance Scenarios
- a chat host can expose only vm_run for one-shot work
- a chat host can promote the same agent to workspace-core without suddenly dumping the full advanced surface on the model
- a new integrator can copy one example and understand the intended progression from one-shot to stable workspace
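In the promotion scenario, a host widens the surface by pointing the same request shape at a different profile endpoint. The sketch below builds the hosted-MCP tool entry for an OpenAI Responses request as a plain dict; the server URLs and labels are placeholders, not real endpoints.

```python
# Hedged sketch of the hosted MCP tool entry an OpenAI Responses request
# would carry; server_url and server_label here are placeholders.
def mcp_tool(server_url: str, profile: str) -> dict:
    return {
        "type": "mcp",
        "server_label": f"pyro-{profile}",
        "server_url": server_url,
        "require_approval": "never",
    }

# Promotion is a different profile endpoint, same request shape:
narrow = mcp_tool("https://example.invalid/mcp/vm-run", "vm-run")
wide = mcp_tool("https://example.invalid/mcp/workspace-core", "workspace-core")
```

The point of the scenario is that nothing else about the chat loop changes: the model sees more tools only because the server behind the URL exposes more.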
Required Repo Updates
- integration docs that explain when to use each profile
- canonical chat examples for both provider tool calling and MCP
- smoke coverage for at least one profile-limited chat loop
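A minimal shape for the profile-limited smoke, assuming the harness can list the tools a session advertises; the forbidden tool names are illustrative, mirroring the schemas the summary says must not leak into workspace-core.

```python
# Hypothetical smoke check for a profile-limited chat loop: a
# workspace-core session must never advertise the advanced surface.
ADVANCED = {
    "workspace_shell", "workspace_service", "workspace_snapshot",
    "workspace_secret", "workspace_network_policy", "workspace_disk",
}

def check_core_profile(advertised: set[str]) -> None:
    """Fail loudly if advanced tools leak into the core profile, or if
    the core exec tool is missing from what the session advertises."""
    leaked = advertised & ADVANCED
    assert not leaked, f"advanced tools leaked into workspace-core: {leaked}"
    assert "workspace_exec" in advertised, "core exec tool missing"
```

Run against a real guest-backed session, this is the cheap half of the smoke; the create/write/exec/diff/export/reset/delete loop exercises the other half.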