Add chat-first workspace roadmap

Document the post-3.1 milestones needed to make the stable workspace product feel natural in chat-driven LLM interfaces.

Add a follow-on roadmap for model-native file ops, workspace naming and discovery, tool profiles, shell output cleanup, and use-case recipes with smoke coverage. Link it from the README, vision doc, and completed workspace GA roadmap so the next phase is explicit.

Keep the sequence anchored to the workspace-first vision and continue to treat disk tools as secondary rather than the main chat-facing surface.
Thales Maciel 2026-03-12 21:06:14 -03:00
parent 287f6d100f
commit dbb71a3174
9 changed files with 326 additions and 4 deletions

# `3.4.0` Tool Profiles And Canonical Chat Flows
Status: Planned
## Goal
Make the model-facing surface intentionally small for chat hosts while keeping
the full workspace product available when needed.
## Public API Changes
Planned additions:
- `pyro mcp serve --profile {vm-run,workspace-core,workspace-full}`
- matching Python SDK and server factory configuration for the same profiles
- one canonical OpenAI Responses example that uses the workspace-core profile
- one canonical MCP/chat example that uses the same profile progression
Representative profile intent:
- `vm-run`: one-shot only
- `workspace-core`: create, status, exec, file ops, diff, reset, export, delete
- `workspace-full`: shells, services, snapshots, secrets, network policy, and
the rest of the stable workspace surface
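The "exposure control" framing above can be sketched as a single tool registry filtered per profile. This is a hypothetical illustration, not the shipped API: the profile names come from this roadmap, but the tool identifiers and the `filter_tools` helper are placeholder assumptions.

```python
# Hypothetical sketch: profiles as an exposure filter over one tool registry.
# Profile names match this roadmap; the tool names below are illustrative
# placeholders, not the real pyro tool surface.

VM_RUN = {"vm_run"}
WORKSPACE_CORE = VM_RUN | {
    "workspace_create", "workspace_status", "workspace_exec",
    "workspace_file_read", "workspace_file_write", "workspace_diff",
    "workspace_reset", "workspace_export", "workspace_delete",
}
WORKSPACE_FULL = WORKSPACE_CORE | {
    "workspace_shell", "workspace_service", "workspace_snapshot",
    "workspace_secret", "workspace_network_policy",
}

PROFILES = {
    "vm-run": VM_RUN,
    "workspace-core": WORKSPACE_CORE,
    "workspace-full": WORKSPACE_FULL,
}

def filter_tools(all_tools: dict, profile: str) -> dict:
    """Return only the tools a given profile is allowed to expose."""
    allowed = PROFILES[profile]
    return {name: tool for name, tool in all_tools.items() if name in allowed}
```

Note that `vm-run` ⊂ `workspace-core` ⊂ `workspace-full`: each profile strictly extends the previous one, mirroring the intended progression from one-shot work to the stable workspace, rather than gating a second product line.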
## Implementation Boundaries
- keep the current full surface available for advanced users
- add profiles as an exposure control, not as a second product line
- make profile behavior explicit in docs and help text
- keep profile names stable once shipped
## Non-Goals
- no framework-specific wrappers inside the core package
- no server-side planner that chooses tools on the model's behalf
- no hidden feature gating by provider or client
## Acceptance Scenarios
- a chat host can expose only `vm_run` for one-shot work
- a chat host can promote the same agent to `workspace-core` without suddenly
dumping the full advanced surface on the model
- a new integrator can copy one example and understand the intended progression
from one-shot to stable workspace
## Required Repo Updates
- integration docs that explain when to use each profile
- canonical chat examples for both provider tool calling and MCP
- smoke coverage for at least one profile-limited chat loop
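One shape the profile smoke coverage could take is an invariant check that each profile is a strict superset of the one before it, so promoting an agent never silently removes tools it already had. The profile contents here are placeholder assumptions for illustration only.

```python
# Hypothetical smoke check for the profile progression. The tool names are
# illustrative placeholders; only the profile names come from this roadmap.
PROFILES = {
    "vm-run": {"vm_run"},
    "workspace-core": {"vm_run", "workspace_create", "workspace_exec",
                       "workspace_diff"},
    "workspace-full": {"vm_run", "workspace_create", "workspace_exec",
                       "workspace_diff", "workspace_shell",
                       "workspace_snapshot"},
}

def check_progression(profiles: dict) -> None:
    """Assert each profile strictly extends the previous, smaller one."""
    order = ["vm-run", "workspace-core", "workspace-full"]
    for smaller, larger in zip(order, order[1:]):
        assert profiles[smaller] < profiles[larger], (smaller, larger)

check_progression(PROFILES)  # passes for the sets above
```

A real smoke test would run the same check against the tool list actually served for each profile, plus one profile-limited chat loop end to end.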