# `3.4.0` Tool Profiles And Canonical Chat Flows Status: Planned ## Goal Make the model-facing surface intentionally small for chat hosts, while keeping the full workspace product available when needed. ## Public API Changes Planned additions: - `pyro mcp serve --profile {vm-run,workspace-core,workspace-full}` - matching Python SDK and server factory configuration for the same profiles - one canonical OpenAI Responses example that uses the workspace-core profile - one canonical MCP/chat example that uses the same profile progression Representative profile intent: - `vm-run`: one-shot only - `workspace-core`: create, status, exec, file ops, diff, reset, export, delete - `workspace-full`: shells, services, snapshots, secrets, network policy, and the rest of the stable workspace surface ## Implementation Boundaries - keep the current full surface available for advanced users - add profiles as an exposure control, not as a second product line - make profile behavior explicit in docs and help text - keep profile names stable once shipped ## Non-Goals - no framework-specific wrappers inside the core package - no server-side planner that chooses tools on the model's behalf - no hidden feature gating by provider or client ## Acceptance Scenarios - a chat host can expose only `vm_run` for one-shot work - a chat host can promote the same agent to `workspace-core` without suddenly dumping the full advanced surface on the model - a new integrator can copy one example and understand the intended progression from one-shot to stable workspace ## Required Repo Updates - integration docs that explain when to use each profile - canonical chat examples for both provider tool calling and MCP - smoke coverage for at least one profile-limited chat loop