pyro-mcp/docs/roadmap/llm-chat-ergonomics/3.5.0-chat-friendly-shell-output.md
Thales Maciel 21a88312b6 Add chat-friendly shell read rendering
Make workspace shell reads usable as direct chat-model input without changing the PTY or cursor model. This adds optional plain rendering and idle-window batching across CLI, SDK, and MCP while keeping raw reads backward-compatible.

Implement the rendering and wait-for-idle logic in the manager layer so the existing guest/backend shell transport stays unchanged. The new helper strips ANSI and other terminal control noise, handles carriage-return overwrite and backspace, and preserves raw cursor semantics even when plain output is requested.
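The normalization described above can be sketched as a small pure function. This is a hedged illustration only: `render_plain` and the escape-sequence regex are stand-ins, not the actual manager-layer helper.

```python
import re

# Matches CSI, OSC, and other common ANSI escape sequences (illustrative, not exhaustive).
ANSI_RE = re.compile(r"\x1b(?:\[[0-9;?]*[ -/]*[@-~]|\][^\x07\x1b]*(?:\x07|\x1b\\)|[@-Z\\-_])")

def render_plain(raw: str) -> str:
    """Strip terminal control sequences and replay CR-overwrite and backspace edits."""
    text = ANSI_RE.sub("", raw)
    lines = []
    for line in text.replace("\r\n", "\n").split("\n"):
        buf: list[str] = []
        col = 0
        for ch in line:
            if ch == "\r":            # carriage return: move cursor to column 0
                col = 0
            elif ch == "\b":          # backspace: step back one column
                col = max(0, col - 1)
            else:                     # overwrite in place, or append at end of line
                if col < len(buf):
                    buf[col] = ch
                else:
                    buf.append(ch)
                col += 1
        lines.append("".join(buf))
    return "\n".join(lines)
```

Replaying CR and backspace as cursor moves, rather than deleting text, is what makes progress-bar output (`50%\r100%`) collapse to its final frame.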

Refresh the stable shell docs/examples to recommend --plain --wait-for-idle-ms 300, mark the 3.5.0 roadmap milestone done, and bump the package/catalog version to 3.5.0.

Validation: uv lock; UV_CACHE_DIR=.uv-cache make check; UV_CACHE_DIR=.uv-cache make dist-check; real guest-backed Firecracker smoke covering shell open/write/read with ANSI plus delayed output.
2026-03-13 01:10:26 -03:00


# `3.5.0` Chat-Friendly Shell Output
Status: Done
## Goal
Keep persistent PTY shells powerful, but make their output clean enough to feed
directly back into a chat model.
## Public API Changes
Planned additions:
- `pyro workspace shell read ... --plain`
- `pyro workspace shell read ... --wait-for-idle-ms N`
- matching Python SDK parameters:
  - `plain=True`
  - `wait_for_idle_ms=...`
- matching MCP request fields on `shell_read`
## Implementation Boundaries
- keep raw PTY reads available for advanced clients
- plain mode should strip terminal control sequences and normalize line endings
- idle waiting should batch the next useful chunk of output without turning the
shell into a separate job scheduler
- keep cursor-based reads so polling clients stay deterministic
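The idle-window batching boundary above can be sketched as a polling loop over cursor reads. Names here are hypothetical: `read_chunk` stands in for one non-blocking cursor read against the real transport.

```python
import time

def read_until_idle(read_chunk, wait_for_idle_ms: int, max_wait_ms: int = 5000) -> str:
    """Batch output until the shell has been quiet for the idle window.

    `read_chunk()` returns whatever new output arrived since the last cursor
    read, or "" when there is none. The overall deadline keeps this from
    becoming a job scheduler: it is a bounded wait, not a completion guarantee.
    """
    chunks: list[str] = []
    deadline = time.monotonic() + max_wait_ms / 1000
    idle_since = time.monotonic()
    while time.monotonic() < deadline:
        chunk = read_chunk()
        if chunk:
            chunks.append(chunk)
            idle_since = time.monotonic()   # new output resets the idle clock
        elif chunks and time.monotonic() - idle_since >= wait_for_idle_ms / 1000:
            break                           # quiet long enough: the batch is done
        else:
            time.sleep(0.01)                # avoid a busy loop between polls
    return "".join(chunks)
```

Because the loop only breaks once something has been read, a slow-starting command still yields its first chunk; the `max_wait_ms` deadline bounds the case where nothing arrives at all.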
## Non-Goals
- no replacement of the PTY shell with a fake line-based shell
- no automatic command synthesis inside shell reads
- no shell-only workflow that replaces `workspace exec`, services, or file ops
## Acceptance Scenarios
- a chat agent can open a shell, write a command, and read back plain text
output without ANSI noise
- long-running interactive setup or debugging flows are readable in chat
- shell output is useful as model input without extra client-side cleanup
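Assuming an SDK surface roughly like the parameters listed earlier, the first scenario might look like the loop below. `FakeShell` is entirely illustrative; it only mimics a guest echoing a command with colored output so the plain-versus-raw contrast is visible.

```python
import re

class FakeShell:
    """Stand-in for a workspace PTY shell; the real SDK surface may differ."""

    def __init__(self):
        self._pending = ""

    def write(self, data: str) -> None:
        # Pretend the guest echoes the command and prints a green result line.
        cmd = data.strip()
        self._pending += f"$ {cmd}\r\n\x1b[32mok: {cmd}\x1b[0m\r\n"

    def read(self, plain: bool = False, wait_for_idle_ms: int = 0) -> str:
        out, self._pending = self._pending, ""
        if plain:
            # Strip SGR color sequences and normalize line endings.
            out = re.sub(r"\x1b\[[0-9;]*m", "", out).replace("\r\n", "\n")
        return out

shell = FakeShell()
shell.write("uname -r\n")
print(shell.read(plain=True, wait_for_idle_ms=300))
# prints:
# $ uname -r
# ok: uname -r
```

The point of the scenario is that the string handed back by the plain read is already fit for a chat model: no ANSI noise, no client-side cleanup.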
## Required Repo Updates
- help text that makes raw versus plain shell reads explicit
- examples that show a clean interactive shell loop
- smoke coverage for at least one shell-driven debugging scenario