Document the post-3.1 milestones needed to make the stable workspace product feel natural in chat-driven LLM interfaces. Add a follow-on roadmap for model-native file ops, workspace naming and discovery, tool profiles, shell output cleanup, and use-case recipes with smoke coverage. Link it from the README, vision doc, and completed workspace GA roadmap so the next phase is explicit. Keep the sequence anchored to the workspace-first vision and continue to treat disk tools as secondary rather than the main chat-facing surface.
3.5.0 Chat-Friendly Shell Output
Status: Planned
Goal
Keep persistent PTY shells powerful, but make their output clean enough to feed directly back into a chat model.
Public API Changes
Planned additions:
- pyro workspace shell read ... --plain
- pyro workspace shell read ... --wait-for-idle-ms N
- matching Python SDK parameters: plain=True, wait_for_idle_ms=...
- matching MCP request fields on shell_read
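A minimal sketch of how the planned parameters might map across the CLI, SDK, and MCP surfaces. The field names plain and wait_for_idle_ms follow the roadmap; the request type, the --cursor flag spelling, and the defaults here are assumptions for illustration only:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ShellReadRequest:
    """Hypothetical request shape shared by CLI, SDK, and MCP shell_read.

    Only plain and wait_for_idle_ms come from the roadmap; the rest is assumed.
    """
    workspace: str
    cursor: int = 0                          # cursor-based reads stay deterministic
    plain: bool = False                      # --plain: strip control sequences
    wait_for_idle_ms: Optional[int] = None   # --wait-for-idle-ms N

def to_cli_args(req: ShellReadRequest) -> list:
    """Render the request as the planned CLI invocation (flag names assumed)."""
    args = ["pyro", "workspace", "shell", "read", req.workspace,
            "--cursor", str(req.cursor)]
    if req.plain:
        args.append("--plain")
    if req.wait_for_idle_ms is not None:
        args += ["--wait-for-idle-ms", str(req.wait_for_idle_ms)]
    return args
```

For example, `to_cli_args(ShellReadRequest("dev", plain=True, wait_for_idle_ms=500))` renders the plain, idle-batched read as a single command line, keeping one request object usable across all three surfaces.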
Implementation Boundaries
- keep raw PTY reads available for advanced clients
- plain mode should strip terminal control sequences and normalize line endings
- idle waiting should batch the next useful chunk of output without turning the shell into a separate job scheduler
- keep cursor-based reads so polling clients stay deterministic
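A plain-mode cleaner along the lines the boundaries describe can be a single regex pass. This is a sketch, not the implementation; the escape-sequence coverage (CSI, OSC, and lone escapes) is illustrative rather than exhaustive:

```python
import re

# Common terminal control sequences: CSI (colors, cursor moves),
# OSC (window-title updates), and remaining two-byte escapes.
_CSI = r"\x1b\[[0-?]*[ -/]*[@-~]"
_OSC = r"\x1b\][^\x07\x1b]*(?:\x07|\x1b\\)"
_ESC = r"\x1b[@-Z\\-_]"
_CONTROL = re.compile(f"{_CSI}|{_OSC}|{_ESC}")

def to_plain(raw: str) -> str:
    """Strip terminal control sequences and normalize line endings to \\n."""
    text = _CONTROL.sub("", raw)
    # PTYs emit \r\n (and sometimes bare \r); normalize both.
    return text.replace("\r\n", "\n").replace("\r", "\n")
```

For example, `to_plain("\x1b[32mok\x1b[0m\r\n")` yields `"ok\n"`, which is safe to feed straight into a chat model.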
Non-Goals
- no replacement of the PTY shell with a fake line-based shell
- no automatic command synthesis inside shell reads
- no shell-only workflow that replaces workspace exec, services, or file ops
Acceptance Scenarios
- a chat agent can open a shell, write a command, and read back plain text output without ANSI noise
- long-running interactive setup or debugging flows are readable in chat
- shell output is useful as model input without extra client-side cleanup
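The first scenario above reduces to a write-then-poll loop. This sketch fakes the transport with an in-memory buffer, so only the cursor/idle shape of the loop is real; the actual shell, its API, and the demo output are stand-ins:

```python
import re
import time

class FakeShell:
    """In-memory stand-in for a PTY-backed shell (the real transport is pyro's)."""
    def __init__(self):
        self._buf = ""
    def write(self, data: str) -> None:
        # A real shell would execute the command; here we just echo a canned result.
        self._buf += f"$ {data}\x1b[33mwarning: demo\x1b[0m\r\ndone\r\n"
    def read(self, cursor: int):
        """Cursor-based read: return new output plus the advanced cursor."""
        chunk = self._buf[cursor:]
        return chunk, cursor + len(chunk)

def read_plain_until_idle(shell, cursor: int, idle_ms: int, max_wait_s: float = 2.0):
    """Batch output until no new bytes arrive for idle_ms; strip ANSI noise."""
    ansi = re.compile(r"\x1b\[[0-?]*[ -/]*[@-~]")
    out = ""
    last_new = time.monotonic()
    deadline = time.monotonic() + max_wait_s
    while time.monotonic() < deadline:
        chunk, cursor = shell.read(cursor)
        if chunk:
            out += chunk
            last_new = time.monotonic()
        elif (time.monotonic() - last_new) * 1000 >= idle_ms:
            break  # output has gone quiet: return the batch
        time.sleep(0.01)
    return ansi.sub("", out).replace("\r\n", "\n"), cursor
```

Running `shell.write("make test\n")` and then `read_plain_until_idle(shell, 0, idle_ms=50)` returns `"$ make test\nwarning: demo\ndone\n"` with no ANSI noise, plus a cursor a later poll can resume from.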
Required Repo Updates
- help text that makes raw versus plain shell reads explicit
- examples that show a clean interactive shell loop
- smoke coverage for at least one shell-driven debugging scenario