Document the post-3.1 milestones needed to make the stable workspace product feel natural in chat-driven LLM interfaces. Add a follow-on roadmap for model-native file ops, workspace naming and discovery, tool profiles, shell output cleanup, and use-case recipes with smoke coverage. Link it from the README, vision doc, and completed workspace GA roadmap so the next phase is explicit. Keep the sequence anchored to the workspace-first vision and continue to treat disk tools as secondary rather than the main chat-facing surface.
# LLM Chat Ergonomics Roadmap

This roadmap picks up after the completed workspace GA plan and focuses on one
goal: make the core agent-workspace use cases feel trivial from a chat-driven
LLM interface.

Current baseline is `3.1.0`:

- the stable workspace contract exists across CLI, SDK, and MCP
- one-shot `pyro run` still exists as the narrow entrypoint
- workspaces already support seeding, sync push, exec, export, diff, snapshots,
  reset, services, PTY shells, secrets, network policy, and published ports
- stopped-workspace disk tools now exist, but remain explicitly secondary

|
## What "Trivial In Chat" Means

The roadmap is done only when a chat-driven LLM can cover the main use cases
without awkward shell choreography or hidden host-side glue:

- cold-start repo validation
- repro plus fix loops
- parallel isolated workspaces for multiple issues or PRs
- unsafe or untrusted code inspection
- review and evaluation workflows

More concretely, the model should not need to:

- patch files through shell-escaped `printf` or heredoc tricks
- rely on opaque workspace IDs without a discovery surface
- consume raw terminal control sequences as normal shell output
- choose from an unnecessarily large tool surface when a smaller profile would
  work

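For contrast, a structured, model-native file edit removes quoting from the transcript entirely. A minimal sketch, assuming a hypothetical `{"path", "old", "new"}` edit payload; the actual 3.2.0 contract is defined in its own milestone doc, not here:

```python
def apply_edit(files: dict[str, str], edit: dict) -> dict[str, str]:
    """Apply one structured edit to an in-memory file map.

    `edit` is a hypothetical payload shape ({"path", "old", "new"}); the
    model sends plain strings, so no printf/heredoc escaping is needed.
    """
    path, old, new = edit["path"], edit["old"], edit["new"]
    content = files[path]
    if content.count(old) != 1:
        # refuse ambiguous edits instead of silently patching the wrong spot
        raise ValueError(f"edit target not unique in {path}")
    return {**files, path: content.replace(old, new, 1)}


# The same change through a shell needs quoting gymnastics such as
#   printf 'retries = 5\n' > config.py
# which break down as soon as the payload itself contains quotes.
files = {"config.py": "retries = 3\ntimeout = 10\n"}
edit = {"path": "config.py", "old": "retries = 3", "new": "retries = 5"}
print(apply_edit(files, edit)["config.py"])
```

The point is that the tool owns the write and rejects ambiguous targets, so the model never round-trips file content through shell escaping.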
## Locked Decisions

- keep the workspace product identity central; do not drift toward CI, queue,
  or runner abstractions
- keep disk tools secondary and do not make them the main chat-facing surface
- prefer narrow tool profiles and structured outputs over more raw shell calls
- every milestone below must update CLI, SDK, and MCP together
- every milestone below must also update docs, help text, runnable examples,
  and at least one real smoke scenario

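"Narrow tool profiles" can be as simple as an allow-list resolved against the full tool surface. A minimal sketch; the tool and profile names below are illustrative only, not the surface 3.4.0 will ship:

```python
# Hypothetical full MCP tool surface; names are illustrative only.
ALL_TOOLS = [
    "vm_run", "workspace_create", "workspace_exec", "workspace_export",
    "workspace_diff", "workspace_snapshot", "workspace_reset",
    "workspace_services", "workspace_secrets", "workspace_ports",
]

# Hypothetical profiles: smaller surfaces for common chat flows.
PROFILES = {
    "one-shot": ["vm_run"],
    "repro-fix": ["workspace_create", "workspace_exec", "workspace_diff"],
}


def tools_for(profile: str) -> list[str]:
    """Resolve a profile name to the subset of tools actually exposed."""
    allowed = set(PROFILES[profile])
    return [t for t in ALL_TOOLS if t in allowed]


print(tools_for("one-shot"))  # a chat session sees one tool, not ten
```

A model choosing among two or three tools makes fewer wrong calls than one choosing among ten, which is the whole argument for profiles over a single maximal surface.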
## Milestones

1. [`3.2.0` Model-Native Workspace File Ops](llm-chat-ergonomics/3.2.0-model-native-workspace-file-ops.md)
2. [`3.3.0` Workspace Naming And Discovery](llm-chat-ergonomics/3.3.0-workspace-naming-and-discovery.md)
3. [`3.4.0` Tool Profiles And Canonical Chat Flows](llm-chat-ergonomics/3.4.0-tool-profiles-and-canonical-chat-flows.md)
4. [`3.5.0` Chat-Friendly Shell Output](llm-chat-ergonomics/3.5.0-chat-friendly-shell-output.md)
5. [`3.6.0` Use-Case Recipes And Smoke Packs](llm-chat-ergonomics/3.6.0-use-case-recipes-and-smoke-packs.md)

## Expected Outcome

After this roadmap, the product should still look like an agent workspace, not
like a CI runner with more isolation.

The intended model-facing shape is:

- one-shot work starts with `vm_run`
- persistent work moves to a small workspace-first contract
- file edits are structured and model-native
- workspace discovery is human- and model-friendly
- shells are readable in chat
- the five core use cases are documented and smoke-tested end to end
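
"Shells are readable in chat" ultimately means stripping terminal control sequences before output reaches the model. A minimal sketch of the kind of cleanup 3.5.0 implies; the regex and carriage-return handling here are assumptions, not the shipped behavior:

```python
import re

# CSI sequences (colors, cursor moves) plus lone two-byte ESC codes.
ANSI_RE = re.compile(r"\x1b\[[0-9;?]*[ -/]*[@-~]|\x1b[@-_]")


def chat_clean(raw: str) -> str:
    """Reduce raw PTY output to plain text suitable for a chat transcript."""
    text = ANSI_RE.sub("", raw)
    out = []
    for line in text.split("\n"):
        # a carriage return overwrites the line; keep only the final state,
        # so progress bars collapse to their last frame
        out.append(line.rsplit("\r", 1)[-1])
    return "\n".join(out)


raw = "\x1b[32mPASS\x1b[0m tests ok\ndownloading 10%\rdownloading 100%"
print(chat_clean(raw))
```

Collapsing progress-bar rewrites matters as much as stripping color codes: a single `pip install` can otherwise flood a transcript with hundreds of near-identical frames.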