Document the post-3.1 milestones needed to make the stable workspace product feel natural in chat-driven LLM interfaces. Add a follow-on roadmap for model-native file ops, workspace naming and discovery, tool profiles, shell output cleanup, and use-case recipes with smoke coverage. Link it from the README, vision doc, and completed workspace GA roadmap so the next phase is explicit. Keep the sequence anchored to the workspace-first vision and continue to treat disk tools as secondary rather than the main chat-facing surface.
# `3.4.0` Tool Profiles And Canonical Chat Flows

Status: Planned

## Goal

Make the model-facing surface intentionally small for chat hosts, while keeping
the full workspace product available when needed.

## Public API Changes

Planned additions:

- `pyro mcp serve --profile {vm-run,workspace-core,workspace-full}`
- matching Python SDK and server-factory configuration for the same profiles
- one canonical OpenAI Responses example that uses the `workspace-core` profile
- one canonical MCP/chat example that uses the same profile progression
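
The planned flag can be sketched with `argparse`. The three choice values are
taken verbatim from the bullet above; the parser, the default, and the help
text are illustrative assumptions, not the shipped `pyro` CLI:

```python
import argparse

# Illustrative stand-in for the planned `pyro mcp serve --profile` flag.
# Only the three profile names are fixed by this roadmap; the default and
# help text below are assumptions.
parser = argparse.ArgumentParser(prog="pyro mcp serve")
parser.add_argument(
    "--profile",
    choices=["vm-run", "workspace-core", "workspace-full"],
    default="workspace-core",  # assumed default; the roadmap does not pick one
    help="tool-exposure profile for the MCP server",
)

args = parser.parse_args(["--profile", "vm-run"])
print(args.profile)  # vm-run
```

Using `choices` means an unknown profile name fails at argument parsing rather
than at serve time, which keeps the exposure control explicit.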

Representative profile intent:

- `vm-run`: one-shot only
- `workspace-core`: create, status, exec, file ops, diff, reset, export, delete
- `workspace-full`: shells, services, snapshots, secrets, network policy, and
  the rest of the stable workspace surface
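
One way the intent above could translate into code. The profile names come
from this roadmap, but every tool identifier and the resolver helper are
hypothetical placeholders, not the shipped API:

```python
# Hypothetical sketch: profile names are from this roadmap; the tool
# identifiers below are illustrative placeholders.
WORKSPACE_CORE = [
    "workspace_create", "workspace_status", "workspace_exec",
    "workspace_read_file", "workspace_write_file", "workspace_diff",
    "workspace_reset", "workspace_export", "workspace_delete",
]

PROFILES: dict[str, list[str]] = {
    "vm-run": ["vm_run"],  # one-shot only
    "workspace-core": WORKSPACE_CORE,
    # workspace-full is a strict superset: core plus the advanced surface
    "workspace-full": WORKSPACE_CORE + [
        "workspace_shell", "workspace_service", "workspace_snapshot",
        "workspace_secret", "workspace_network_policy",
    ],
}

def tools_for(profile: str) -> list[str]:
    """Resolve a profile name to the tool names it exposes to the model."""
    if profile not in PROFILES:
        raise ValueError(f"unknown profile: {profile!r}")
    return PROFILES[profile]
```

The superset layout matters for the promotion scenario later in this doc:
moving an agent from `workspace-core` to `workspace-full` only ever adds
tools, it never removes one the model already relies on.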

## Implementation Boundaries

- keep the current full surface available for advanced users
- add profiles as an exposure control, not as a second product line
- make profile behavior explicit in docs and help text
- keep profile names stable once shipped

## Non-Goals

- no framework-specific wrappers inside the core package
- no server-side planner that chooses tools on the model's behalf
- no hidden feature gating by provider or client

## Acceptance Scenarios

- a chat host can expose only `vm_run` for one-shot work
- a chat host can promote the same agent to `workspace-core` without suddenly
  dumping the full advanced surface on the model
- a new integrator can copy one example and understand the intended progression
  from one-shot to stable workspace

## Required Repo Updates

- integration docs that explain when to use each profile
- canonical chat examples for both provider tool calling and MCP
- smoke coverage for at least one profile-limited chat loop
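
A minimal sketch of what that profile-limited smoke check could assert. Every
function and tool name here is a hypothetical stand-in for the real server
surface:

```python
# Illustrative smoke check (all names hypothetical): a host configured with
# the vm-run profile must advertise exactly one tool and no workspace surface.

def advertised_tools(profile: str) -> set[str]:
    """Stand-in for listing tools from a running profile-limited server."""
    table = {
        "vm-run": {"vm_run"},
        "workspace-core": {
            "vm_run", "workspace_create", "workspace_exec", "workspace_diff",
        },
    }
    return table[profile]

def smoke_vm_run_is_one_shot_only() -> None:
    tools = advertised_tools("vm-run")
    assert tools == {"vm_run"}
    assert not any(name.startswith("workspace_") for name in tools)

smoke_vm_run_is_one_shot_only()
print("vm-run smoke check passed")
```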