Add workspace network policy and published ports
Replace the workspace-level boolean network toggle with explicit network policies and attach localhost TCP publication to workspace services.

Persist network_policy in workspace records, validate --publish requests, and run host-side proxy helpers that follow the service lifecycle so published ports are cleaned up on failure, stop, reset, and delete.

Update the CLI, SDK, MCP contract, docs, roadmap, and examples for the new policy model, add coverage for the proxy and manager edge cases, and validate with `uv lock`, `UV_CACHE_DIR=.uv-cache make check`, `UV_CACHE_DIR=.uv-cache make dist-check`, and a real guest-backed published-port probe smoke test.
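The host-side proxy helper this message describes can be sketched with nothing but the standard library. `publish_port` and `_pipe` below are hypothetical names for illustration, not the real pyro-mcp internals — a minimal sketch of the localhost-only forwarding idea, binding `127.0.0.1` on the host and piping bytes to whatever local endpoint reaches the guest port:

```python
import socket
import threading

def _pipe(src: socket.socket, dst: socket.socket) -> None:
    # Copy bytes one direction until EOF, then signal the peer side.
    try:
        while chunk := src.recv(65536):
            dst.sendall(chunk)
    except OSError:
        pass
    finally:
        for s in (src, dst):
            try:
                s.shutdown(socket.SHUT_RDWR)
            except OSError:
                pass

def publish_port(backend_port: int, host_port: int = 0) -> tuple[socket.socket, int]:
    """Listen on 127.0.0.1:host_port and forward each connection to backend_port.

    Returns the listener (close it to tear the published port down) and the
    actual bound host port (useful when host_port=0 picks an ephemeral port).
    """
    listener = socket.create_server(("127.0.0.1", host_port))

    def accept_loop() -> None:
        while True:
            try:
                client, _ = listener.accept()
            except OSError:  # listener closed on service stop/reset/delete
                return
            backend = socket.create_connection(("127.0.0.1", backend_port))
            threading.Thread(target=_pipe, args=(client, backend), daemon=True).start()
            threading.Thread(target=_pipe, args=(backend, client), daemon=True).start()

    threading.Thread(target=accept_loop, daemon=True).start()
    return listener, listener.getsockname()[1]
```

Closing the returned listener is what makes the lifecycle cleanup cheap: the accept loop exits and no new host connections can reach the guest port.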
parent fc72fcd3a1
commit c82f4629b2
21 changed files with 1944 additions and 49 deletions
@@ -2,6 +2,15 @@
 All notable user-visible changes to `pyro-mcp` are documented here.

+## 2.10.0
+
+- Replaced the workspace-level boolean network toggle with explicit workspace network policies:
+  `off`, `egress`, and `egress+published-ports`.
+- Added localhost-only published TCP ports for workspace services across the CLI, Python SDK, and
+  MCP server, including returned host/guest port metadata on service start, list, and status.
+- Kept published ports attached to services rather than `/workspace` itself, so host probing works
+  without changing workspace diff, export, shell, or reset semantics.
+
 ## 2.9.0

 - Added explicit workspace secrets across the CLI, Python SDK, and MCP server with
README.md
@@ -20,7 +20,7 @@ It exposes the same runtime in three public forms:
 - First run transcript: [docs/first-run.md](docs/first-run.md)
 - Terminal walkthrough GIF: [docs/assets/first-run.gif](docs/assets/first-run.gif)
 - PyPI package: [pypi.org/project/pyro-mcp](https://pypi.org/project/pyro-mcp/)
-- What's new in 2.9.0: [CHANGELOG.md#290](CHANGELOG.md#290)
+- What's new in 2.10.0: [CHANGELOG.md#2100](CHANGELOG.md#2100)
 - Host requirements: [docs/host-requirements.md](docs/host-requirements.md)
 - Integration targets: [docs/integrations.md](docs/integrations.md)
 - Public contract: [docs/public-contract.md](docs/public-contract.md)
@@ -57,7 +57,7 @@ What success looks like:
 ```bash
 Platform: linux-x86_64
 Runtime: PASS
-Catalog version: 2.9.0
+Catalog version: 2.10.0
 ...
 [pull] phase=install environment=debian:12
 [pull] phase=ready environment=debian:12
@@ -78,6 +78,7 @@ After the quickstart works:
 - prove the full one-shot lifecycle with `uvx --from pyro-mcp pyro demo`
 - create a persistent workspace with `uvx --from pyro-mcp pyro workspace create debian:12 --seed-path ./repo`
 - update a live workspace from the host with `uvx --from pyro-mcp pyro workspace sync push WORKSPACE_ID ./changes`
+- enable outbound guest networking for one workspace with `uvx --from pyro-mcp pyro workspace create debian:12 --network-policy egress`
 - add literal or file-backed secrets with `uvx --from pyro-mcp pyro workspace create debian:12 --secret API_TOKEN=expected --secret-file PIP_TOKEN=./token.txt`
 - map one persisted secret into one exec, shell, or service call with `--secret-env API_TOKEN`
 - diff the live workspace against its create-time baseline with `uvx --from pyro-mcp pyro workspace diff WORKSPACE_ID`
@@ -86,6 +87,7 @@ After the quickstart works:
 - export a changed file or directory with `uvx --from pyro-mcp pyro workspace export WORKSPACE_ID note.txt --output ./note.txt`
 - open a persistent interactive shell with `uvx --from pyro-mcp pyro workspace shell open WORKSPACE_ID`
 - start long-running workspace services with `uvx --from pyro-mcp pyro workspace service start WORKSPACE_ID app --ready-file .ready -- sh -lc 'touch .ready && while true; do sleep 60; done'`
+- publish one guest service port to the host with `uvx --from pyro-mcp pyro workspace create debian:12 --network-policy egress+published-ports` and `uvx --from pyro-mcp pyro workspace service start WORKSPACE_ID app --ready-http http://127.0.0.1:8080/ --publish 18080:8080 -- ./start-app`
 - move to Python or MCP via [docs/integrations.md](docs/integrations.md)

 ## Supported Hosts
@@ -139,7 +141,7 @@ uvx --from pyro-mcp pyro env list
 Expected output:

 ```bash
-Catalog version: 2.9.0
+Catalog version: 2.10.0
 debian:12 [installed|not installed] Debian 12 environment with Git preinstalled for common agent workflows.
 debian:12-base [installed|not installed] Minimal Debian 12 environment for shell and core Unix tooling.
 debian:12-build [installed|not installed] Debian 12 environment with Git and common build tools preinstalled.
@@ -215,7 +217,9 @@ longer-term interaction model.

 ```bash
 pyro workspace create debian:12 --seed-path ./repo
+pyro workspace create debian:12 --network-policy egress
 pyro workspace create debian:12 --seed-path ./repo --secret API_TOKEN=expected
+pyro workspace create debian:12 --network-policy egress+published-ports
 pyro workspace sync push WORKSPACE_ID ./changes --dest src
 pyro workspace exec WORKSPACE_ID -- cat src/note.txt
 pyro workspace exec WORKSPACE_ID --secret-env API_TOKEN -- sh -lc 'test "$API_TOKEN" = "expected"'
@@ -230,6 +234,7 @@ pyro workspace shell read WORKSPACE_ID SHELL_ID
 pyro workspace shell close WORKSPACE_ID SHELL_ID
 pyro workspace service start WORKSPACE_ID web --secret-env API_TOKEN --ready-file .web-ready -- sh -lc 'touch .web-ready && while true; do sleep 60; done'
 pyro workspace service start WORKSPACE_ID worker --ready-file .worker-ready -- sh -lc 'touch .worker-ready && while true; do sleep 60; done'
+pyro workspace service start WORKSPACE_ID app --ready-http http://127.0.0.1:8080/ --publish 18080:8080 -- ./start-app
 pyro workspace service list WORKSPACE_ID
 pyro workspace service status WORKSPACE_ID web
 pyro workspace service logs WORKSPACE_ID web --tail-lines 50
@@ -243,7 +248,7 @@ Persistent workspaces start in `/workspace` and keep command history until you d
 machine consumption, add `--json` and read the returned `workspace_id`. Use `--seed-path` when
 you want the workspace to start from a host directory or a local `.tar` / `.tar.gz` / `.tgz`
 archive instead of an empty workspace. Use `pyro workspace sync push` when you want to import
-later host-side changes into a started workspace. Sync is non-atomic in `2.9.0`; if it fails
+later host-side changes into a started workspace. Sync is non-atomic in `2.10.0`; if it fails
 partway through, prefer `pyro workspace reset` to recover from `baseline` or one named snapshot.
 Use `pyro workspace diff` to compare the live `/workspace` tree to its immutable create-time
 baseline, and `pyro workspace export` to copy one changed file or directory back to the host. Use
@@ -255,6 +260,9 @@ persistent PTY session that keeps interactive shell state between calls. Use
 Typed readiness checks prefer `--ready-file`, `--ready-tcp`, or `--ready-http`; keep
 `--ready-command` as the escape hatch. Service metadata and logs live outside `/workspace`, so the
 internal service state does not appear in `pyro workspace diff` or `pyro workspace export`.
+Use `--network-policy egress` when the workspace needs outbound guest networking, and
+`--network-policy egress+published-ports` plus `workspace service start --publish` when one
+service must be probed from the host on `127.0.0.1`.
 Use `--secret` and `--secret-file` at workspace creation when the sandbox needs private tokens or
 config. Persisted secrets are materialized inside the guest at `/run/pyro-secrets/<name>`, and
 `--secret-env SECRET_NAME[=ENV_VAR]` maps one secret into one exec, shell, or service call without
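The typed readiness checks named above (`--ready-file`, `--ready-tcp`, `--ready-http`) can be sketched as three plain probe functions. These are illustrative stand-ins, not the pyro-mcp implementation — the real checks run against the guest, and the function names here are assumptions:

```python
import socket
import urllib.request
from pathlib import Path

def ready_file(path: str) -> bool:
    # --ready-file: the service touches a marker file when it is up.
    return Path(path).exists()

def ready_tcp(host: str, port: int, timeout: float = 1.0) -> bool:
    # --ready-tcp: a successful TCP connect means the port is listening.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def ready_http(url: str, timeout: float = 1.0) -> bool:
    # --ready-http: any non-error HTTP response counts as ready.
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except OSError:
        return False
```

A service manager would poll one of these at the configured interval until it returns true or the ready timeout expires.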
@@ -430,7 +438,7 @@ Advanced lifecycle tools:

 Persistent workspace tools:

-- `workspace_create(environment, vcpu_count=1, mem_mib=1024, ttl_seconds=600, network=false, allow_host_compat=false, seed_path=null, secrets=null)`
+- `workspace_create(environment, vcpu_count=1, mem_mib=1024, ttl_seconds=600, network_policy="off", allow_host_compat=false, seed_path=null, secrets=null)`
 - `workspace_sync_push(workspace_id, source_path, dest="/workspace")`
 - `workspace_exec(workspace_id, command, timeout_seconds=30, secret_env=null)`
 - `workspace_export(workspace_id, path, output_path)`
@@ -439,7 +447,7 @@ Persistent workspace tools:
 - `snapshot_list(workspace_id)`
 - `snapshot_delete(workspace_id, snapshot_name)`
 - `workspace_reset(workspace_id, snapshot="baseline")`
-- `service_start(workspace_id, service_name, command, cwd="/workspace", readiness=null, ready_timeout_seconds=30, ready_interval_ms=500, secret_env=null)`
+- `service_start(workspace_id, service_name, command, cwd="/workspace", readiness=null, ready_timeout_seconds=30, ready_interval_ms=500, secret_env=null, published_ports=null)`
 - `service_list(workspace_id)`
 - `service_status(workspace_id, service_name)`
 - `service_logs(workspace_id, service_name, tail_lines=200)`
@@ -22,7 +22,7 @@ Networking: tun=yes ip_forward=yes

 ```bash
 $ uvx --from pyro-mcp pyro env list
-Catalog version: 2.9.0
+Catalog version: 2.10.0
 debian:12 [installed|not installed] Debian 12 environment with Git preinstalled for common agent workflows.
 debian:12-base [installed|not installed] Minimal Debian 12 environment for shell and core Unix tooling.
 debian:12-build [installed|not installed] Debian 12 environment with Git and common build tools preinstalled.
@@ -72,6 +72,7 @@ deterministic structured result.
 $ uvx --from pyro-mcp pyro demo
 $ uvx --from pyro-mcp pyro workspace create debian:12 --seed-path ./repo
 $ uvx --from pyro-mcp pyro workspace sync push WORKSPACE_ID ./changes
+$ uvx --from pyro-mcp pyro workspace create debian:12 --network-policy egress
 $ uvx --from pyro-mcp pyro workspace create debian:12 --secret API_TOKEN=expected --secret-file PIP_TOKEN=./token.txt
 $ uvx --from pyro-mcp pyro workspace exec WORKSPACE_ID --secret-env API_TOKEN -- sh -lc 'test "$API_TOKEN" = "expected"'
 $ uvx --from pyro-mcp pyro workspace diff WORKSPACE_ID
@@ -80,6 +81,8 @@ $ uvx --from pyro-mcp pyro workspace reset WORKSPACE_ID --snapshot checkpoint
 $ uvx --from pyro-mcp pyro workspace export WORKSPACE_ID note.txt --output ./note.txt
 $ uvx --from pyro-mcp pyro workspace shell open WORKSPACE_ID --secret-env API_TOKEN
 $ uvx --from pyro-mcp pyro workspace service start WORKSPACE_ID app --secret-env API_TOKEN --ready-file .ready -- sh -lc 'touch .ready && while true; do sleep 60; done'
+$ uvx --from pyro-mcp pyro workspace create debian:12 --network-policy egress+published-ports
+$ uvx --from pyro-mcp pyro workspace service start WORKSPACE_ID app --ready-http http://127.0.0.1:8080/ --publish 18080:8080 -- ./start-app
 $ uvx --from pyro-mcp pyro mcp serve
 ```

@@ -94,6 +97,7 @@ Environment: debian:12
 State: started
 Workspace: /workspace
 Workspace seed: directory from ...
+Network policy: off
 Execution mode: guest_vsock
 Resources: 1 vCPU / 1024 MiB
 Command count: 0
@@ -144,6 +148,14 @@ $ uvx --from pyro-mcp pyro workspace service start WORKSPACE_ID web --secret-env
 $ uvx --from pyro-mcp pyro workspace service start WORKSPACE_ID worker --ready-file .worker-ready -- sh -lc 'touch .worker-ready && while true; do sleep 60; done'
 [workspace-service-start] workspace_id=... service=worker state=running cwd=/workspace ready_type=file execution_mode=guest_vsock

+$ uvx --from pyro-mcp pyro workspace create debian:12 --network-policy egress+published-ports
+Workspace ID: ...
+Network policy: egress+published-ports
+...
+
+$ uvx --from pyro-mcp pyro workspace service start WORKSPACE_ID app --ready-http http://127.0.0.1:8080/ --publish 18080:8080 -- ./start-app
+[workspace-service-start] workspace_id=... service=app state=running cwd=/workspace ready_type=http execution_mode=guest_vsock published=127.0.0.1:18080->8080/tcp
+
 $ uvx --from pyro-mcp pyro workspace service list WORKSPACE_ID
 Workspace: ...
 Services: 2 total, 2 running
@@ -175,7 +187,7 @@ $ uvx --from pyro-mcp pyro workspace service stop WORKSPACE_ID worker
 Use `--seed-path` when the workspace should start from a host directory or a local
 `.tar` / `.tar.gz` / `.tgz` archive instead of an empty `/workspace`. Use
 `pyro workspace sync push` when you need to import later host-side changes into a started
-workspace. Sync is non-atomic in `2.9.0`; if it fails partway through, prefer `pyro workspace reset`
+workspace. Sync is non-atomic in `2.10.0`; if it fails partway through, prefer `pyro workspace reset`
 to recover from `baseline` or one named snapshot. Use `pyro workspace diff` to compare the current
 `/workspace` tree to its immutable create-time baseline, `pyro workspace snapshot *` to create
 named checkpoints, and `pyro workspace export` to copy one changed file or directory back to the
@@ -183,10 +195,13 @@ host. Use `pyro workspace exec` for one-shot commands and `pyro workspace shell
 need a persistent interactive PTY session in that same workspace. Use `pyro workspace service *`
 when the workspace needs long-running background processes with typed readiness checks. Internal
 service state and logs stay outside `/workspace`, so service runtime data does not appear in
-workspace diff or export results. Use `--secret` and `--secret-file` at workspace creation when
-the sandbox needs private tokens or config. Persisted secret files are materialized at
-`/run/pyro-secrets/<name>`, and `--secret-env SECRET_NAME[=ENV_VAR]` maps one secret into one
-exec, shell, or service call without storing that environment mapping on the workspace itself.
+workspace diff or export results. Use `--network-policy egress` for outbound guest networking, and
+`--network-policy egress+published-ports` plus `workspace service start --publish` when one
+service must be reachable from the host on `127.0.0.1`. Use `--secret` and `--secret-file` at
+workspace creation when the sandbox needs private tokens or config. Persisted secret files are
+materialized at `/run/pyro-secrets/<name>`, and `--secret-env SECRET_NAME[=ENV_VAR]` maps one
+secret into one exec, shell, or service call without storing that environment mapping on the
+workspace itself.

 Example output:

@@ -83,7 +83,7 @@ uvx --from pyro-mcp pyro env list
 Expected output:

 ```bash
-Catalog version: 2.9.0
+Catalog version: 2.10.0
 debian:12 [installed|not installed] Debian 12 environment with Git preinstalled for common agent workflows.
 debian:12-base [installed|not installed] Minimal Debian 12 environment for shell and core Unix tooling.
 debian:12-build [installed|not installed] Debian 12 environment with Git and common build tools preinstalled.
@@ -176,12 +176,14 @@ After the CLI path works, you can move on to:

 - persistent workspaces: `pyro workspace create debian:12 --seed-path ./repo`
 - live workspace updates: `pyro workspace sync push WORKSPACE_ID ./changes`
+- guest networking policy: `pyro workspace create debian:12 --network-policy egress`
 - workspace secrets: `pyro workspace create debian:12 --secret API_TOKEN=expected --secret-file PIP_TOKEN=./token.txt`
 - baseline diff: `pyro workspace diff WORKSPACE_ID`
 - snapshots and reset: `pyro workspace snapshot create WORKSPACE_ID checkpoint` and `pyro workspace reset WORKSPACE_ID --snapshot checkpoint`
 - host export: `pyro workspace export WORKSPACE_ID note.txt --output ./note.txt`
 - interactive shells: `pyro workspace shell open WORKSPACE_ID`
 - long-running services: `pyro workspace service start WORKSPACE_ID app --ready-file .ready -- sh -lc 'touch .ready && while true; do sleep 60; done'`
+- localhost-published ports: `pyro workspace create debian:12 --network-policy egress+published-ports` and `pyro workspace service start WORKSPACE_ID app --ready-http http://127.0.0.1:8080/ --publish 18080:8080 -- ./start-app`
 - MCP: `pyro mcp serve`
 - Python SDK: `from pyro_mcp import Pyro`
 - Demos: `pyro demo` or `pyro demo --network`
@@ -192,7 +194,9 @@ Use `pyro workspace ...` when you need repeated commands in one sandbox instead

 ```bash
 pyro workspace create debian:12 --seed-path ./repo
+pyro workspace create debian:12 --network-policy egress
 pyro workspace create debian:12 --seed-path ./repo --secret API_TOKEN=expected
+pyro workspace create debian:12 --network-policy egress+published-ports
 pyro workspace sync push WORKSPACE_ID ./changes --dest src
 pyro workspace exec WORKSPACE_ID -- cat src/note.txt
 pyro workspace exec WORKSPACE_ID --secret-env API_TOKEN -- sh -lc 'test "$API_TOKEN" = "expected"'
@@ -207,6 +211,7 @@ pyro workspace shell read WORKSPACE_ID SHELL_ID
 pyro workspace shell close WORKSPACE_ID SHELL_ID
 pyro workspace service start WORKSPACE_ID web --secret-env API_TOKEN --ready-file .web-ready -- sh -lc 'touch .web-ready && while true; do sleep 60; done'
 pyro workspace service start WORKSPACE_ID worker --ready-file .worker-ready -- sh -lc 'touch .worker-ready && while true; do sleep 60; done'
+pyro workspace service start WORKSPACE_ID app --ready-http http://127.0.0.1:8080/ --publish 18080:8080 -- ./start-app
 pyro workspace service list WORKSPACE_ID
 pyro workspace service status WORKSPACE_ID web
 pyro workspace service logs WORKSPACE_ID web --tail-lines 50
@@ -220,7 +225,7 @@ Workspace commands default to the persistent `/workspace` directory inside the g
 the identifier programmatically, use `--json` and read the `workspace_id` field. Use `--seed-path`
 when the workspace should start from a host directory or a local `.tar` / `.tar.gz` / `.tgz`
 archive. Use `pyro workspace sync push` for later host-side changes to a started workspace. Sync
-is non-atomic in `2.9.0`; if it fails partway through, prefer `pyro workspace reset` to recover
+is non-atomic in `2.10.0`; if it fails partway through, prefer `pyro workspace reset` to recover
 from `baseline` or one named snapshot. Use `pyro workspace diff` to compare the current workspace
 tree to its immutable create-time baseline, `pyro workspace snapshot *` to capture named
 checkpoints, and `pyro workspace export` to copy one changed file or directory back to the host. Use
@@ -228,10 +233,13 @@ checkpoints, and `pyro workspace export` to copy one changed file or directory b
 interactive PTY that survives across separate calls. Use `pyro workspace service *` when the
 workspace needs long-running background processes with typed readiness probes. Service metadata and
 logs stay outside `/workspace`, so the service runtime itself does not show up in workspace diff or
-export results. Use `--secret` and `--secret-file` at workspace creation when the sandbox needs
-private tokens or config, and `--secret-env SECRET_NAME[=ENV_VAR]` when one exec, shell, or
-service call needs that secret as an environment variable. Persisted secret files are available in
-the guest at `/run/pyro-secrets/<name>`.
+export results. Use `--network-policy egress` when the workspace needs outbound guest networking,
+and `--network-policy egress+published-ports` plus `workspace service start --publish` when one
+service must be reachable from the host on `127.0.0.1`. Use `--secret` and `--secret-file` at
+workspace creation when the sandbox needs private tokens or config, and
+`--secret-env SECRET_NAME[=ENV_VAR]` when one exec, shell, or service call needs that secret as an
+environment variable. Persisted secret files are available in the guest at
+`/run/pyro-secrets/<name>`.

 ## Contributor Clone

@@ -32,6 +32,7 @@ Recommended surface:
 - `vm_run`
 - `workspace_create(seed_path=...)` + `workspace_sync_push` + `workspace_exec` when the agent needs persistent workspace state
 - `workspace_create(..., secrets=...)` + `workspace_exec(..., secret_env=...)` when the workspace needs private tokens or authenticated setup
+- `workspace_create(..., network_policy="egress+published-ports")` + `start_service(..., published_ports=[...])` when the host must probe one workspace service
 - `workspace_diff` + `workspace_export` when the agent needs explicit baseline comparison or host-out file transfer
 - `start_service` / `list_services` / `status_service` / `logs_service` / `stop_service` when the agent needs long-running processes inside that workspace
 - `open_shell(..., secret_env=...)` / `read_shell` / `write_shell` when the agent needs an interactive PTY inside that workspace
@@ -71,6 +72,7 @@ Recommended default:
 - `Pyro.run_in_vm(...)`
 - `Pyro.create_workspace(seed_path=...)` + `Pyro.push_workspace_sync(...)` + `Pyro.exec_workspace(...)` when repeated workspace commands are required
 - `Pyro.create_workspace(..., secrets=...)` + `Pyro.exec_workspace(..., secret_env=...)` when the workspace needs private tokens or authenticated setup
+- `Pyro.create_workspace(..., network_policy="egress+published-ports")` + `Pyro.start_service(..., published_ports=[...])` when the host must probe one workspace service
 - `Pyro.diff_workspace(...)` + `Pyro.export_workspace(...)` when the agent needs baseline comparison or host-out file transfer
 - `Pyro.start_service(..., secret_env=...)` + `Pyro.list_services(...)` + `Pyro.logs_service(...)` when the agent needs long-running background processes in one workspace
 - `Pyro.open_shell(..., secret_env=...)` + `Pyro.write_shell(...)` + `Pyro.read_shell(...)` when the agent needs an interactive PTY inside the workspace
@@ -86,6 +88,9 @@ Lifecycle note:
   running workspace without recreating it
 - use `create_workspace(..., secrets=...)` plus `secret_env` on exec, shell, or service start when
   the agent needs private tokens or authenticated startup inside that workspace
+- use `create_workspace(..., network_policy="egress+published-ports")` plus
+  `start_service(..., published_ports=[...])` when the host must probe one service from that
+  workspace
 - use `diff_workspace(...)` when the agent needs a structured comparison against the immutable
   create-time baseline
 - use `export_workspace(...)` when the agent needs one file or directory copied back to the host
@@ -64,6 +64,7 @@ Behavioral guarantees:
 - `pyro demo ollama` prints log lines plus a final summary line.
 - `pyro workspace create` auto-starts a persistent workspace.
 - `pyro workspace create --seed-path PATH` seeds `/workspace` from a host directory or a local `.tar` / `.tar.gz` / `.tgz` archive before the workspace is returned.
+- `pyro workspace create --network-policy {off,egress,egress+published-ports}` controls workspace guest networking and whether services may publish localhost ports.
 - `pyro workspace create --secret NAME=VALUE` and `--secret-file NAME=PATH` persist guest-only UTF-8 secrets outside `/workspace`.
 - `pyro workspace sync push WORKSPACE_ID SOURCE_PATH [--dest WORKSPACE_PATH]` imports later host-side directory or archive content into a started workspace.
 - `pyro workspace export WORKSPACE_ID PATH --output HOST_PATH` exports one file or directory from `/workspace` back to the host.
@@ -71,6 +72,7 @@ Behavioral guarantees:
 - `pyro workspace snapshot *` manages explicit named snapshots in addition to the implicit `baseline`.
 - `pyro workspace reset WORKSPACE_ID [--snapshot SNAPSHOT_NAME|baseline]` recreates the full sandbox and restores `/workspace` from the chosen snapshot.
 - `pyro workspace service *` manages long-running named services inside one started workspace with typed readiness probes.
+- `pyro workspace service start --publish GUEST_PORT` or `--publish HOST_PORT:GUEST_PORT` publishes one guest TCP port to `127.0.0.1` on the host.
 - `pyro workspace exec --secret-env SECRET_NAME[=ENV_VAR]` maps one persisted secret into one exec call.
 - `pyro workspace service start --secret-env SECRET_NAME[=ENV_VAR]` maps one persisted secret into one service start call.
 - `pyro workspace exec` runs in the persistent `/workspace` for that workspace and does not auto-clean.
@@ -78,9 +80,11 @@ Behavioral guarantees:
 - `pyro workspace shell *` manages persistent PTY sessions inside a started workspace.
 - `pyro workspace logs` returns persisted command history for that workspace until `pyro workspace delete`.
 - Workspace create/status results expose `workspace_seed` metadata describing how `/workspace` was initialized.
+- Workspace create/status/reset results expose `network_policy`.
 - Workspace create/status/reset results expose `reset_count` and `last_reset_at`.
 - Workspace create/status/reset results expose safe `secrets` metadata with each secret name and source kind, but never the secret values.
 - `pyro workspace status` includes aggregate `service_count` and `running_service_count` fields.
+- `pyro workspace service start`, `pyro workspace service list`, and `pyro workspace service status` expose published-port metadata when present.

 ## Python SDK Contract

@@ -97,7 +101,7 @@ Supported public entrypoints:
 - `Pyro.inspect_environment(environment)`
 - `Pyro.prune_environments()`
 - `Pyro.create_vm(...)`
-- `Pyro.create_workspace(..., secrets=None)`
+- `Pyro.create_workspace(..., network_policy="off", secrets=None)`
 - `Pyro.push_workspace_sync(workspace_id, source_path, *, dest="/workspace")`
 - `Pyro.export_workspace(workspace_id, path, *, output_path)`
 - `Pyro.diff_workspace(workspace_id)`
@@ -105,7 +109,7 @@ Supported public entrypoints:
 - `Pyro.list_snapshots(workspace_id)`
 - `Pyro.delete_snapshot(workspace_id, snapshot_name)`
 - `Pyro.reset_workspace(workspace_id, *, snapshot="baseline")`
-- `Pyro.start_service(workspace_id, service_name, *, command, cwd="/workspace", readiness=None, ready_timeout_seconds=30, ready_interval_ms=500, secret_env=None)`
+- `Pyro.start_service(workspace_id, service_name, *, command, cwd="/workspace", readiness=None, ready_timeout_seconds=30, ready_interval_ms=500, secret_env=None, published_ports=None)`
 - `Pyro.list_services(workspace_id)`
 - `Pyro.status_service(workspace_id, service_name)`
 - `Pyro.logs_service(workspace_id, service_name, *, tail_lines=200, all=False)`
@@ -136,7 +140,7 @@ Stable public method names:
 - `inspect_environment(environment)`
 - `prune_environments()`
 - `create_vm(...)`
-- `create_workspace(..., secrets=None)`
+- `create_workspace(..., network_policy="off", secrets=None)`
 - `push_workspace_sync(workspace_id, source_path, *, dest="/workspace")`
 - `export_workspace(workspace_id, path, *, output_path)`
 - `diff_workspace(workspace_id)`
@@ -144,7 +148,7 @@ Stable public method names:
 - `list_snapshots(workspace_id)`
 - `delete_snapshot(workspace_id, snapshot_name)`
 - `reset_workspace(workspace_id, *, snapshot="baseline")`
-- `start_service(workspace_id, service_name, *, command, cwd="/workspace", readiness=None, ready_timeout_seconds=30, ready_interval_ms=500, secret_env=None)`
+- `start_service(workspace_id, service_name, *, command, cwd="/workspace", readiness=None, ready_timeout_seconds=30, ready_interval_ms=500, secret_env=None, published_ports=None)`
 - `list_services(workspace_id)`
 - `status_service(workspace_id, service_name)`
 - `logs_service(workspace_id, service_name, *, tail_lines=200, all=False)`
@@ -174,6 +178,7 @@ Behavioral defaults:
 - `allow_host_compat` defaults to `False` on `create_vm(...)` and `run_in_vm(...)`.
 - `allow_host_compat` defaults to `False` on `create_workspace(...)`.
 - `Pyro.create_workspace(..., seed_path=...)` seeds `/workspace` from a host directory or a local `.tar` / `.tar.gz` / `.tgz` archive before the workspace is returned.
+- `Pyro.create_workspace(..., network_policy="off"|"egress"|"egress+published-ports")` controls workspace guest networking and whether services may publish host ports.
 - `Pyro.create_workspace(..., secrets=...)` persists guest-only UTF-8 secrets outside `/workspace`.
 - `Pyro.push_workspace_sync(...)` imports later host-side directory or archive content into a started workspace.
 - `Pyro.export_workspace(...)` exports one file or directory from `/workspace` to an explicit host path.
@@ -184,6 +189,7 @@ Behavioral defaults:
 - `Pyro.reset_workspace(...)` recreates the full sandbox from `baseline` or one named snapshot and clears command, shell, and service history.
 - `Pyro.start_service(..., secret_env=...)` maps persisted workspace secrets into that service process as environment variables for that start call only.
 - `Pyro.start_service(...)` starts one named long-running process in a started workspace and waits for its typed readiness probe when configured.
+- `Pyro.start_service(..., published_ports=[...])` publishes one or more guest TCP ports to `127.0.0.1` on the host when the workspace network policy is `egress+published-ports`.
 - `Pyro.list_services(...)`, `Pyro.status_service(...)`, `Pyro.logs_service(...)`, and `Pyro.stop_service(...)` manage those persisted workspace services.
 - `Pyro.exec_vm(...)` runs one command and auto-cleans that VM after the exec completes.
 - `Pyro.exec_workspace(..., secret_env=...)` maps persisted workspace secrets into that exec call as environment variables for that call only.
@@ -243,6 +249,7 @@ Behavioral defaults:
 - `vm_run` and `vm_create` expose `allow_host_compat`, which defaults to `false`.
 - `workspace_create` exposes `allow_host_compat`, which defaults to `false`.
 - `workspace_create` accepts optional `seed_path` and seeds `/workspace` from a host directory or a local `.tar` / `.tar.gz` / `.tgz` archive before the workspace is returned.
+- `workspace_create` accepts `network_policy` with `off`, `egress`, or `egress+published-ports` to control workspace guest networking and service port publication.
 - `workspace_create` accepts optional `secrets` and persists guest-only UTF-8 secret material outside `/workspace`.
 - `workspace_sync_push` imports later host-side directory or archive content into a started workspace, with an optional `dest` under `/workspace`.
 - `workspace_export` exports one file or directory from `/workspace` to an explicit host path.
@@ -250,6 +257,7 @@ Behavioral defaults:
 - `snapshot_create`, `snapshot_list`, and `snapshot_delete` manage explicit named snapshots in addition to the implicit `baseline`.
 - `workspace_reset` recreates the full sandbox and restores `/workspace` from `baseline` or one named snapshot.
 - `service_start`, `service_list`, `service_status`, `service_logs`, and `service_stop` manage persistent named services inside a started workspace.
+- `service_start` accepts optional `published_ports` to expose guest TCP ports on `127.0.0.1` when the workspace network policy is `egress+published-ports`.
 - `vm_exec` runs one command and auto-cleans that VM after the exec completes.
 - `workspace_exec` accepts optional `secret_env` mappings for one exec call and leaves the workspace alive.
 - `service_start` accepts optional `secret_env` mappings for one service start call.
@@ -2,7 +2,7 @@
 
 This roadmap turns the agent-workspace vision into release-sized milestones.
 
-Current baseline is `2.9.0`:
+Current baseline is `2.10.0`:
 
 - workspace persistence exists and the public surface is now workspace-first
 - host crossing currently covers create-time seeding, later sync push, and explicit export

@@ -11,7 +11,7 @@ Current baseline is `2.9.0`:
 - multi-service lifecycle exists with typed readiness and aggregate workspace status counts
 - named snapshots and full workspace reset now exist
 - explicit secrets now exist for guest-backed workspaces
-- no explicit host port publication contract exists yet
+- explicit workspace network policy and localhost published service ports now exist
 
 Locked roadmap decisions:

@@ -36,7 +36,7 @@ also expected to update:
 4. [`2.7.0` Service Lifecycle And Typed Readiness](task-workspace-ga/2.7.0-service-lifecycle-and-typed-readiness.md) - Done
 5. [`2.8.0` Named Snapshots And Reset](task-workspace-ga/2.8.0-named-snapshots-and-reset.md) - Done
 6. [`2.9.0` Secrets](task-workspace-ga/2.9.0-secrets.md) - Done
-7. [`2.10.0` Network Policy And Host Port Publication](task-workspace-ga/2.10.0-network-policy-and-host-port-publication.md)
+7. [`2.10.0` Network Policy And Host Port Publication](task-workspace-ga/2.10.0-network-policy-and-host-port-publication.md) - Done
 8. [`3.0.0` Stable Workspace Product](task-workspace-ga/3.0.0-stable-workspace-product.md)
 9. [`3.1.0` Secondary Disk Tools](task-workspace-ga/3.1.0-secondary-disk-tools.md)
@@ -1,5 +1,7 @@
 # `2.10.0` Network Policy And Host Port Publication
 
+Status: Done
+
 ## Goal
 
 Replace the coarse current network toggle with an explicit workspace network
@@ -21,6 +21,7 @@ def main() -> None:
     created = pyro.create_workspace(
         environment="debian:12",
         seed_path=seed_dir,
+        network_policy="egress+published-ports",
         secrets=[
             {"name": "API_TOKEN", "value": "expected"},
             {"name": "FILE_TOKEN", "file_path": str(secret_file)},

@@ -60,11 +61,13 @@ def main() -> None:
         command="touch .web-ready && while true; do sleep 60; done",
         readiness={"type": "file", "path": ".web-ready"},
         secret_env={"API_TOKEN": "API_TOKEN"},
+        published_ports=[{"guest_port": 8080}],
     )
     services = pyro.list_services(workspace_id)
     print(f"services={services['count']} running={services['running_count']}")
     service_status = pyro.status_service(workspace_id, "web")
     print(f"service_state={service_status['state']} ready_at={service_status['ready_at']}")
+    print(f"published_ports={service_status['published_ports']}")
     service_logs = pyro.logs_service(workspace_id, "web", tail_lines=20)
     print(f"service_stdout_len={len(service_logs['stdout'])}")
     pyro.stop_service(workspace_id, "web")
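Once the example above starts the service, the `published_ports` entries it prints can be probed directly from the host. A minimal probe sketch, assuming only the `host`/`host_port` payload shape shown in this diff (the helper name is illustrative, not part of the SDK):

```python
import socket


def probe_published_port(entry: dict, timeout: float = 2.0) -> bool:
    """Return True when the host side of a published port accepts a TCP connection."""
    host = str(entry.get("host", "127.0.0.1"))
    port = int(entry["host_port"])
    try:
        # A plain TCP connect is enough to confirm the host-side proxy is listening.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

This is the shape of the "real guest-backed published-port probe smoke" mentioned in the commit message: connect to `127.0.0.1:<host_port>` and treat a refused connection as failure.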
@@ -1,6 +1,6 @@
 [project]
 name = "pyro-mcp"
-version = "2.9.0"
+version = "2.10.0"
 description = "Ephemeral Firecracker sandboxes with curated environments, persistent workspaces, and MCP tools."
 readme = "README.md"
 license = { file = "LICENSE" }
@@ -13,6 +13,7 @@ from pyro_mcp.vm_manager import (
     DEFAULT_TIMEOUT_SECONDS,
     DEFAULT_TTL_SECONDS,
     DEFAULT_VCPU_COUNT,
+    DEFAULT_WORKSPACE_NETWORK_POLICY,
     VmManager,
 )
 

@@ -84,7 +85,7 @@ class Pyro:
         vcpu_count: int = DEFAULT_VCPU_COUNT,
         mem_mib: int = DEFAULT_MEM_MIB,
         ttl_seconds: int = DEFAULT_TTL_SECONDS,
-        network: bool = False,
+        network_policy: str = DEFAULT_WORKSPACE_NETWORK_POLICY,
         allow_host_compat: bool = DEFAULT_ALLOW_HOST_COMPAT,
         seed_path: str | Path | None = None,
         secrets: list[dict[str, str]] | None = None,

@@ -94,7 +95,7 @@ class Pyro:
             vcpu_count=vcpu_count,
             mem_mib=mem_mib,
             ttl_seconds=ttl_seconds,
-            network=network,
+            network_policy=network_policy,
             allow_host_compat=allow_host_compat,
             seed_path=seed_path,
             secrets=secrets,

@@ -241,6 +242,7 @@ class Pyro:
         ready_timeout_seconds: int = 30,
         ready_interval_ms: int = 500,
         secret_env: dict[str, str] | None = None,
+        published_ports: list[dict[str, int | None]] | None = None,
     ) -> dict[str, Any]:
         return self._manager.start_service(
             workspace_id,

@@ -251,6 +253,7 @@ class Pyro:
             ready_timeout_seconds=ready_timeout_seconds,
             ready_interval_ms=ready_interval_ms,
             secret_env=secret_env,
+            published_ports=published_ports,
         )
 
     def list_services(self, workspace_id: str) -> dict[str, Any]:

@@ -408,7 +411,7 @@ class Pyro:
     vcpu_count: int = DEFAULT_VCPU_COUNT,
     mem_mib: int = DEFAULT_MEM_MIB,
     ttl_seconds: int = DEFAULT_TTL_SECONDS,
-    network: bool = False,
+    network_policy: str = DEFAULT_WORKSPACE_NETWORK_POLICY,
     allow_host_compat: bool = DEFAULT_ALLOW_HOST_COMPAT,
     seed_path: str | None = None,
     secrets: list[dict[str, str]] | None = None,

@@ -419,7 +422,7 @@ class Pyro:
         vcpu_count=vcpu_count,
         mem_mib=mem_mib,
         ttl_seconds=ttl_seconds,
-        network=network,
+        network_policy=network_policy,
         allow_host_compat=allow_host_compat,
         seed_path=seed_path,
         secrets=secrets,

@@ -574,6 +577,7 @@ class Pyro:
     ready_timeout_seconds: int = 30,
     ready_interval_ms: int = 500,
     secret_env: dict[str, str] | None = None,
+    published_ports: list[dict[str, int | None]] | None = None,
 ) -> dict[str, Any]:
     """Start a named long-running service inside a workspace."""
     readiness: dict[str, Any] | None = None

@@ -594,6 +598,7 @@ class Pyro:
         ready_timeout_seconds=ready_timeout_seconds,
         ready_interval_ms=ready_interval_ms,
         secret_env=secret_env,
+        published_ports=published_ports,
     )
 
 @server.tool()
@@ -160,6 +160,7 @@ def _print_workspace_summary_human(payload: dict[str, Any], *, action: str) -> N
     print(f"Environment: {str(payload.get('environment', 'unknown'))}")
     print(f"State: {str(payload.get('state', 'unknown'))}")
     print(f"Workspace: {str(payload.get('workspace_path', '/workspace'))}")
+    print(f"Network policy: {str(payload.get('network_policy', 'off'))}")
     workspace_seed = payload.get("workspace_seed")
     if isinstance(workspace_seed, dict):
         mode = str(workspace_seed.get("mode", "empty"))

@@ -378,13 +379,27 @@ def _print_workspace_shell_read_human(payload: dict[str, Any]) -> None:
 
 
 def _print_workspace_service_summary_human(payload: dict[str, Any], *, prefix: str) -> None:
+    published_ports = payload.get("published_ports")
+    published_text = ""
+    if isinstance(published_ports, list) and published_ports:
+        parts = []
+        for item in published_ports:
+            if not isinstance(item, dict):
+                continue
+            parts.append(
+                f"{str(item.get('host', '127.0.0.1'))}:{int(item.get('host_port', 0))}"
+                f"->{int(item.get('guest_port', 0))}/{str(item.get('protocol', 'tcp'))}"
+            )
+        if parts:
+            published_text = " published_ports=" + ",".join(parts)
     print(
         f"[{prefix}] "
         f"workspace_id={str(payload.get('workspace_id', 'unknown'))} "
         f"service_name={str(payload.get('service_name', 'unknown'))} "
         f"state={str(payload.get('state', 'unknown'))} "
         f"cwd={str(payload.get('cwd', WORKSPACE_GUEST_PATH))} "
-        f"execution_mode={str(payload.get('execution_mode', 'unknown'))}",
+        f"execution_mode={str(payload.get('execution_mode', 'unknown'))}"
+        f"{published_text}",
         file=sys.stderr,
         flush=True,
     )

@@ -402,6 +417,18 @@ def _print_workspace_service_list_human(payload: dict[str, Any]) -> None:
             f"{str(service.get('service_name', 'unknown'))} "
             f"[{str(service.get('state', 'unknown'))}] "
             f"cwd={str(service.get('cwd', WORKSPACE_GUEST_PATH))}"
+            + (
+                " published="
+                + ",".join(
+                    f"{str(item.get('host', '127.0.0.1'))}:{int(item.get('host_port', 0))}"
+                    f"->{int(item.get('guest_port', 0))}/{str(item.get('protocol', 'tcp'))}"
+                    for item in service.get("published_ports", [])
+                    if isinstance(item, dict)
+                )
+                if isinstance(service.get("published_ports"), list)
+                and service.get("published_ports")
+                else ""
+            )
         )
 
 
@@ -683,6 +710,7 @@ def _build_parser() -> argparse.ArgumentParser:
 Examples:
   pyro workspace create debian:12
   pyro workspace create debian:12 --seed-path ./repo
+  pyro workspace create debian:12 --network-policy egress
   pyro workspace create debian:12 --secret API_TOKEN=expected
   pyro workspace sync push WORKSPACE_ID ./changes
   pyro workspace snapshot create WORKSPACE_ID checkpoint

@@ -718,9 +746,10 @@ def _build_parser() -> argparse.ArgumentParser:
         help="Time-to-live for the workspace before automatic cleanup.",
     )
     workspace_create_parser.add_argument(
-        "--network",
-        action="store_true",
-        help="Enable outbound guest networking for the workspace guest.",
+        "--network-policy",
+        choices=("off", "egress", "egress+published-ports"),
+        default="off",
+        help="Workspace network policy.",
     )
     workspace_create_parser.add_argument(
         "--allow-host-compat",

@@ -1204,6 +1233,8 @@ def _build_parser() -> argparse.ArgumentParser:
 Examples:
   pyro workspace service start WORKSPACE_ID app --ready-file .ready -- \
     sh -lc 'touch .ready && while true; do sleep 60; done'
+  pyro workspace service start WORKSPACE_ID app --ready-file .ready --publish 8080 -- \
+    sh -lc 'touch .ready && python3 -m http.server 8080'
   pyro workspace service list WORKSPACE_ID
   pyro workspace service status WORKSPACE_ID app
   pyro workspace service logs WORKSPACE_ID app --tail-lines 50

@@ -1229,6 +1260,9 @@ def _build_parser() -> argparse.ArgumentParser:
 Examples:
   pyro workspace service start WORKSPACE_ID app --ready-file .ready -- \
     sh -lc 'touch .ready && while true; do sleep 60; done'
+  pyro workspace service start WORKSPACE_ID app \
+    --ready-file .ready --publish 18080:8080 -- \
+    sh -lc 'touch .ready && python3 -m http.server 8080'
   pyro workspace service start WORKSPACE_ID app --secret-env API_TOKEN -- \
     sh -lc 'test \"$API_TOKEN\" = \"expected\"; touch .ready; \
 while true; do sleep 60; done'

@@ -1280,6 +1314,16 @@ while true; do sleep 60; done'
         metavar="SECRET[=ENV_VAR]",
         help="Expose one persisted workspace secret as an environment variable for this service.",
     )
+    workspace_service_start_parser.add_argument(
+        "--publish",
+        action="append",
+        default=[],
+        metavar="GUEST_PORT|HOST_PORT:GUEST_PORT",
+        help=(
+            "Publish one guest TCP port on 127.0.0.1. Requires workspace network policy "
+            "`egress+published-ports`."
+        ),
+    )
     workspace_service_start_parser.add_argument(
         "--json",
         action="store_true",
@@ -1528,6 +1572,33 @@ def _parse_workspace_secret_env_options(values: list[str]) -> dict[str, str]:
     return parsed
 
 
+def _parse_workspace_publish_options(values: list[str]) -> list[dict[str, int | None]]:
+    parsed: list[dict[str, int | None]] = []
+    for raw_value in values:
+        candidate = raw_value.strip()
+        if candidate == "":
+            raise ValueError("published ports must not be empty")
+        if ":" in candidate:
+            raw_host_port, raw_guest_port = candidate.split(":", 1)
+            try:
+                host_port = int(raw_host_port)
+                guest_port = int(raw_guest_port)
+            except ValueError as exc:
+                raise ValueError(
+                    "published ports must use GUEST_PORT or HOST_PORT:GUEST_PORT"
+                ) from exc
+            parsed.append({"host_port": host_port, "guest_port": guest_port})
+        else:
+            try:
+                guest_port = int(candidate)
+            except ValueError as exc:
+                raise ValueError(
+                    "published ports must use GUEST_PORT or HOST_PORT:GUEST_PORT"
+                ) from exc
+            parsed.append({"host_port": None, "guest_port": guest_port})
+    return parsed
+
+
 def main() -> None:
     args = _build_parser().parse_args()
     pyro = Pyro()
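The `--publish` grammar is `GUEST_PORT` or `HOST_PORT:GUEST_PORT`. A simplified standalone sketch of that parsing (it leans on `int()`'s own `ValueError` instead of the CLI's wrapped error messages, and the function name is illustrative):

```python
def parse_publish(values: list[str]) -> list[dict]:
    """Mirror of the CLI --publish grammar: GUEST_PORT or HOST_PORT:GUEST_PORT."""
    parsed = []
    for raw in values:
        candidate = raw.strip()
        if not candidate:
            raise ValueError("published ports must not be empty")
        if ":" in candidate:
            # Explicit host port requested: HOST_PORT:GUEST_PORT.
            host_text, guest_text = candidate.split(":", 1)
            parsed.append({"host_port": int(host_text), "guest_port": int(guest_text)})
        else:
            # Bare guest port: the manager picks a free host port.
            parsed.append({"host_port": None, "guest_port": int(candidate)})
    return parsed


print(parse_publish(["8080", "18080:8080"]))
# → [{'host_port': None, 'guest_port': 8080}, {'host_port': 18080, 'guest_port': 8080}]
```

A `host_port` of `None` signals "let the host side choose", which matches the `str(spec.host_port or 0)` listener-port handling visible later in this diff.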
@@ -1634,7 +1705,7 @@ def main() -> None:
             vcpu_count=args.vcpu_count,
             mem_mib=args.mem_mib,
             ttl_seconds=args.ttl_seconds,
-            network=args.network,
+            network_policy=getattr(args, "network_policy", "off"),
             allow_host_compat=args.allow_host_compat,
             seed_path=args.seed_path,
             secrets=secrets or None,

@@ -1932,6 +2003,7 @@ def main() -> None:
         readiness = {"type": "command", "command": args.ready_command}
     command = _require_command(args.command_args)
     secret_env = _parse_workspace_secret_env_options(getattr(args, "secret_env", []))
+    published_ports = _parse_workspace_publish_options(getattr(args, "publish", []))
     try:
         payload = pyro.start_service(
             args.workspace_id,

@@ -1942,6 +2014,7 @@ def main() -> None:
             ready_timeout_seconds=args.ready_timeout_seconds,
             ready_interval_ms=args.ready_interval_ms,
             secret_env=secret_env or None,
+            published_ports=published_ports or None,
         )
     except Exception as exc:  # noqa: BLE001
         if bool(args.json):
@@ -27,7 +27,7 @@ PUBLIC_CLI_WORKSPACE_CREATE_FLAGS = (
     "--vcpu-count",
     "--mem-mib",
     "--ttl-seconds",
-    "--network",
+    "--network-policy",
     "--allow-host-compat",
     "--seed-path",
     "--secret",

@@ -49,6 +49,7 @@ PUBLIC_CLI_WORKSPACE_SERVICE_START_FLAGS = (
     "--ready-timeout-seconds",
     "--ready-interval-ms",
     "--secret-env",
+    "--publish",
     "--json",
 )
 PUBLIC_CLI_WORKSPACE_SERVICE_STATUS_FLAGS = ("--json",)
@@ -19,7 +19,7 @@ from typing import Any
 from pyro_mcp.runtime import DEFAULT_PLATFORM, RuntimePaths
 
 DEFAULT_ENVIRONMENT_VERSION = "1.0.0"
-DEFAULT_CATALOG_VERSION = "2.9.0"
+DEFAULT_CATALOG_VERSION = "2.10.0"
 OCI_MANIFEST_ACCEPT = ", ".join(
     (
         "application/vnd.oci.image.index.v1+json",
@@ -12,6 +12,7 @@ import shutil
 import signal
+import socket
 import subprocess
 import sys
 import tarfile
 import tempfile
 import threading

@@ -33,6 +34,7 @@ from pyro_mcp.vm_environments import EnvironmentStore, default_cache_dir, get_en
 from pyro_mcp.vm_firecracker import build_launch_plan
 from pyro_mcp.vm_guest import VsockExecClient
 from pyro_mcp.vm_network import NetworkConfig, TapNetworkManager
+from pyro_mcp.workspace_ports import DEFAULT_PUBLISHED_PORT_HOST
 from pyro_mcp.workspace_shells import (
     create_local_shell,
     get_local_shell,

@@ -43,6 +45,7 @@ from pyro_mcp.workspace_shells import (
 VmState = Literal["created", "started", "stopped"]
 WorkspaceShellState = Literal["running", "stopped"]
 WorkspaceServiceState = Literal["running", "exited", "stopped", "failed"]
+WorkspaceNetworkPolicy = Literal["off", "egress", "egress+published-ports"]
 
 DEFAULT_VCPU_COUNT = 1
 DEFAULT_MEM_MIB = 1024

@@ -50,7 +53,7 @@ DEFAULT_TIMEOUT_SECONDS = 30
 DEFAULT_TTL_SECONDS = 600
 DEFAULT_ALLOW_HOST_COMPAT = False
 
-WORKSPACE_LAYOUT_VERSION = 6
+WORKSPACE_LAYOUT_VERSION = 7
 WORKSPACE_BASELINE_DIRNAME = "baseline"
 WORKSPACE_BASELINE_ARCHIVE_NAME = "workspace.tar"
 WORKSPACE_SNAPSHOTS_DIRNAME = "snapshots"

@@ -72,6 +75,7 @@ DEFAULT_SHELL_MAX_CHARS = 65536
 DEFAULT_SERVICE_READY_TIMEOUT_SECONDS = 30
 DEFAULT_SERVICE_READY_INTERVAL_MS = 500
 DEFAULT_SERVICE_LOG_TAIL_LINES = 200
+DEFAULT_WORKSPACE_NETWORK_POLICY: WorkspaceNetworkPolicy = "off"
 WORKSPACE_SHELL_SIGNAL_NAMES = shell_signal_names()
 WORKSPACE_SERVICE_NAME_RE = re.compile(r"^[A-Za-z0-9][A-Za-z0-9._-]{0,63}$")
 WORKSPACE_SNAPSHOT_NAME_RE = re.compile(r"^[A-Za-z0-9][A-Za-z0-9._-]{0,63}$")

@@ -117,7 +121,7 @@ class WorkspaceRecord:
     created_at: float
     expires_at: float
     state: VmState
-    network_requested: bool
+    network_policy: WorkspaceNetworkPolicy
     allow_host_compat: bool
     firecracker_pid: int | None = None
     last_error: str | None = None

@@ -135,6 +139,7 @@ class WorkspaceRecord:
         cls,
         instance: VmInstance,
         *,
+        network_policy: WorkspaceNetworkPolicy = DEFAULT_WORKSPACE_NETWORK_POLICY,
         command_count: int = 0,
         last_command: dict[str, Any] | None = None,
         workspace_seed: dict[str, Any] | None = None,

@@ -149,7 +154,7 @@ class WorkspaceRecord:
             created_at=instance.created_at,
             expires_at=instance.expires_at,
             state=instance.state,
-            network_requested=instance.network_requested,
+            network_policy=network_policy,
             allow_host_compat=instance.allow_host_compat,
             firecracker_pid=instance.firecracker_pid,
             last_error=instance.last_error,

@@ -174,7 +179,7 @@ class WorkspaceRecord:
             expires_at=self.expires_at,
             workdir=workdir,
             state=self.state,
-            network_requested=self.network_requested,
+            network_requested=self.network_policy != "off",
             allow_host_compat=self.allow_host_compat,
             firecracker_pid=self.firecracker_pid,
             last_error=self.last_error,

@@ -193,7 +198,7 @@ class WorkspaceRecord:
             "created_at": self.created_at,
             "expires_at": self.expires_at,
             "state": self.state,
-            "network_requested": self.network_requested,
+            "network_policy": self.network_policy,
             "allow_host_compat": self.allow_host_compat,
             "firecracker_pid": self.firecracker_pid,
             "last_error": self.last_error,

@@ -218,7 +223,7 @@ class WorkspaceRecord:
             created_at=float(payload["created_at"]),
             expires_at=float(payload["expires_at"]),
             state=cast(VmState, str(payload.get("state", "stopped"))),
-            network_requested=bool(payload.get("network_requested", False)),
+            network_policy=_workspace_network_policy_from_payload(payload),
             allow_host_compat=bool(payload.get("allow_host_compat", DEFAULT_ALLOW_HOST_COMPAT)),
             firecracker_pid=_optional_int(payload.get("firecracker_pid")),
             last_error=_optional_str(payload.get("last_error")),

@@ -365,6 +370,7 @@ class WorkspaceServiceRecord:
     pid: int | None = None
     execution_mode: str = "pending"
     stop_reason: str | None = None
+    published_ports: list[WorkspacePublishedPortRecord] = field(default_factory=list)
     metadata: dict[str, str] = field(default_factory=dict)
 
     def to_payload(self) -> dict[str, Any]:

@@ -383,6 +389,9 @@ class WorkspaceServiceRecord:
             "pid": self.pid,
             "execution_mode": self.execution_mode,
             "stop_reason": self.stop_reason,
+            "published_ports": [
+                published_port.to_payload() for published_port in self.published_ports
+            ],
             "metadata": dict(self.metadata),
         }
 
@@ -412,10 +421,53 @@ class WorkspaceServiceRecord:
             pid=None if payload.get("pid") is None else int(payload.get("pid", 0)),
             execution_mode=str(payload.get("execution_mode", "pending")),
             stop_reason=_optional_str(payload.get("stop_reason")),
+            published_ports=_workspace_published_port_records(payload.get("published_ports")),
             metadata=_string_dict(payload.get("metadata")),
         )
 
 
+@dataclass(frozen=True)
+class WorkspacePublishedPortRecord:
+    """Persisted localhost published-port metadata for one service."""
+
+    guest_port: int
+    host_port: int
+    host: str = DEFAULT_PUBLISHED_PORT_HOST
+    protocol: str = "tcp"
+    proxy_pid: int | None = None
+
+    def to_payload(self) -> dict[str, Any]:
+        return {
+            "guest_port": self.guest_port,
+            "host_port": self.host_port,
+            "host": self.host,
+            "protocol": self.protocol,
+            "proxy_pid": self.proxy_pid,
+        }
+
+    @classmethod
+    def from_payload(cls, payload: dict[str, Any]) -> WorkspacePublishedPortRecord:
+        return cls(
+            guest_port=int(payload["guest_port"]),
+            host_port=int(payload["host_port"]),
+            host=str(payload.get("host", DEFAULT_PUBLISHED_PORT_HOST)),
+            protocol=str(payload.get("protocol", "tcp")),
+            proxy_pid=(
+                None
+                if payload.get("proxy_pid") is None
+                else int(payload.get("proxy_pid", 0))
+            ),
+        )
+
+
+@dataclass(frozen=True)
+class WorkspacePublishedPortSpec:
+    """Requested published-port configuration for one service."""
+
+    guest_port: int
+    host_port: int | None = None
+
+
 @dataclass(frozen=True)
 class PreparedWorkspaceSeed:
     """Prepared host-side seed archive plus metadata."""
@@ -534,6 +586,49 @@ def _workspace_seed_dict(value: object) -> dict[str, Any]:
     return payload
 
 
+def _normalize_workspace_network_policy(policy: str) -> WorkspaceNetworkPolicy:
+    normalized = policy.strip().lower()
+    if normalized not in {"off", "egress", "egress+published-ports"}:
+        raise ValueError("network_policy must be one of: off, egress, egress+published-ports")
+    return cast(WorkspaceNetworkPolicy, normalized)
+
+
+def _workspace_network_policy_from_payload(payload: dict[str, Any]) -> WorkspaceNetworkPolicy:
+    raw_policy = payload.get("network_policy")
+    if raw_policy is not None:
+        return _normalize_workspace_network_policy(str(raw_policy))
+    raw_network_requested = payload.get("network_requested", False)
+    if isinstance(raw_network_requested, str):
+        network_requested = raw_network_requested.strip().lower() in {"1", "true", "yes", "on"}
+    else:
+        network_requested = bool(raw_network_requested)
+    if network_requested:
+        return "egress"
+    return DEFAULT_WORKSPACE_NETWORK_POLICY
+
+
+def _serialize_workspace_published_port_public(
+    published_port: WorkspacePublishedPortRecord,
+) -> dict[str, Any]:
+    return {
+        "host": published_port.host,
+        "host_port": published_port.host_port,
+        "guest_port": published_port.guest_port,
+        "protocol": published_port.protocol,
+    }
+
+
+def _workspace_published_port_records(value: object) -> list[WorkspacePublishedPortRecord]:
+    if not isinstance(value, list):
+        return []
+    records: list[WorkspacePublishedPortRecord] = []
+    for item in value:
+        if not isinstance(item, dict):
+            continue
+        records.append(WorkspacePublishedPortRecord.from_payload(item))
+    return records
+
+
 def _workspace_secret_records(value: object) -> list[WorkspaceSecretRecord]:
     if not isinstance(value, list):
         return []
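The payload helper above is the migration path from the old boolean toggle to the new policy enum: explicit `network_policy` wins, otherwise a truthy legacy `network_requested` maps to plain `egress`. A standalone sketch of that mapping (function name illustrative):

```python
def policy_from_payload(payload: dict) -> str:
    """Prefer the explicit policy; fall back to the legacy boolean toggle."""
    raw = payload.get("network_policy")
    if raw is not None:
        value = str(raw).strip().lower()
        if value not in {"off", "egress", "egress+published-ports"}:
            raise ValueError("network_policy must be one of: off, egress, egress+published-ports")
        return value
    legacy = payload.get("network_requested", False)
    if isinstance(legacy, str):
        # Older records may have stored the toggle as a string.
        legacy = legacy.strip().lower() in {"1", "true", "yes", "on"}
    return "egress" if bool(legacy) else "off"
```

Note the asymmetry: a legacy `true` can only ever become `egress`, never `egress+published-ports`, so old workspaces never silently gain host port publication.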
@@ -1159,6 +1254,59 @@ def _normalize_workspace_secret_env_mapping(
     return normalized
 
 
+def _normalize_workspace_published_port(
+    *,
+    guest_port: object,
+    host_port: object | None = None,
+) -> WorkspacePublishedPortSpec:
+    if isinstance(guest_port, bool) or not isinstance(guest_port, int | str):
+        raise ValueError("published guest_port must be an integer")
+    try:
+        normalized_guest_port = int(guest_port)
+    except (TypeError, ValueError) as exc:
+        raise ValueError("published guest_port must be an integer") from exc
+    if normalized_guest_port <= 0 or normalized_guest_port > 65535:
+        raise ValueError("published guest_port must be between 1 and 65535")
+    normalized_host_port: int | None = None
+    if host_port is not None:
+        if isinstance(host_port, bool) or not isinstance(host_port, int | str):
+            raise ValueError("published host_port must be an integer")
+        try:
+            normalized_host_port = int(host_port)
+        except (TypeError, ValueError) as exc:
+            raise ValueError("published host_port must be an integer") from exc
+        if normalized_host_port <= 1024 or normalized_host_port > 65535:
+            raise ValueError("published host_port must be between 1025 and 65535")
+    return WorkspacePublishedPortSpec(
+        guest_port=normalized_guest_port,
+        host_port=normalized_host_port,
+    )
+
+
+def _normalize_workspace_published_port_specs(
+    published_ports: list[dict[str, Any]] | None,
+) -> list[WorkspacePublishedPortSpec]:
+    if not published_ports:
+        return []
+    normalized: list[WorkspacePublishedPortSpec] = []
+    seen_guest_ports: set[tuple[int | None, int]] = set()
+    for index, item in enumerate(published_ports, start=1):
+        if not isinstance(item, dict):
+            raise ValueError(f"published port #{index} must be a dictionary")
+        spec = _normalize_workspace_published_port(
+            guest_port=item.get("guest_port"),
+            host_port=item.get("host_port"),
+        )
+        dedupe_key = (spec.host_port, spec.guest_port)
+        if dedupe_key in seen_guest_ports:
+            raise ValueError(
+                "published ports must not repeat the same host/guest port mapping"
+            )
+        seen_guest_ports.add(dedupe_key)
+        normalized.append(spec)
+    return normalized
+
+
 def _normalize_workspace_service_readiness(
     readiness: dict[str, Any] | None,
 ) -> dict[str, Any] | None:
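The validation above encodes three rules: guest ports must be 1-65535, explicit host ports must be 1025-65535 (no privileged ports), and booleans are rejected even though `bool` is an `int` subclass in Python. A compact sketch of those rules (name illustrative):

```python
def normalize_port(guest_port, host_port=None) -> dict:
    """Validate one published-port request: guest 1-65535, host 1025-65535 or None."""
    # bool is a subclass of int, so reject it explicitly before converting.
    if isinstance(guest_port, bool) or not isinstance(guest_port, (int, str)):
        raise ValueError("guest_port must be an integer")
    guest = int(guest_port)
    if not 1 <= guest <= 65535:
        raise ValueError("guest_port must be between 1 and 65535")
    host = None
    if host_port is not None:
        if isinstance(host_port, bool) or not isinstance(host_port, (int, str)):
            raise ValueError("host_port must be an integer")
        host = int(host_port)
        if not 1025 <= host <= 65535:
            raise ValueError("host_port must be between 1025 and 65535")
    return {"guest_port": guest, "host_port": host}
```

The 1025 floor matters because the host-side proxy runs unprivileged; binding a port at or below 1024 would fail anyway on a default Linux host.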
@@ -1215,6 +1363,15 @@ def _workspace_service_runner_path(services_dir: Path, service_name: str) -> Pat
     return services_dir / f"{service_name}.runner.sh"
 
 
+def _workspace_service_port_ready_path(
+    services_dir: Path,
+    service_name: str,
+    host_port: int,
+    guest_port: int,
+) -> Path:
+    return services_dir / f"{service_name}.port-{host_port}-to-{guest_port}.ready.json"
+
+
 def _read_service_exit_code(status_path: Path) -> int | None:
     if not status_path.exists():
         return None

@@ -1348,6 +1505,7 @@ def _refresh_local_service_record(
         pid=service.pid,
         execution_mode=service.execution_mode,
         stop_reason=service.stop_reason,
+        published_ports=list(service.published_ports),
         metadata=dict(service.metadata),
     )
     return refreshed
@@ -1466,6 +1624,95 @@ def _stop_local_service(
     return refreshed
 
 
+def _start_workspace_published_port_proxy(
+    *,
+    services_dir: Path,
+    service_name: str,
+    workspace_id: str,
+    guest_ip: str,
+    spec: WorkspacePublishedPortSpec,
+) -> WorkspacePublishedPortRecord:
+    ready_path = _workspace_service_port_ready_path(
+        services_dir,
+        service_name,
+        spec.host_port or 0,
+        spec.guest_port,
+    )
+    ready_path.unlink(missing_ok=True)
+    command = [
+        sys.executable,
+        "-m",
+        "pyro_mcp.workspace_ports",
+        "--listen-host",
+        DEFAULT_PUBLISHED_PORT_HOST,
+        "--listen-port",
+        str(spec.host_port or 0),
+        "--target-host",
+        guest_ip,
+        "--target-port",
+        str(spec.guest_port),
+        "--ready-file",
+        str(ready_path),
+    ]
+    process = subprocess.Popen(  # noqa: S603
+        command,
+        text=True,
+        stdout=subprocess.DEVNULL,
+        stderr=subprocess.DEVNULL,
+        start_new_session=True,
+    )
+    deadline = time.monotonic() + 5
+    while time.monotonic() < deadline:
+        if ready_path.exists():
+            payload = json.loads(ready_path.read_text(encoding="utf-8"))
+            if not isinstance(payload, dict):
+                raise RuntimeError("published port proxy ready payload is invalid")
+            ready_path.unlink(missing_ok=True)
+            return WorkspacePublishedPortRecord(
+                guest_port=int(payload.get("target_port", spec.guest_port)),
+                host_port=int(payload["host_port"]),
+                host=str(payload.get("host", DEFAULT_PUBLISHED_PORT_HOST)),
+                protocol=str(payload.get("protocol", "tcp")),
+                proxy_pid=process.pid,
+            )
+        if process.poll() is not None:
+            raise RuntimeError(
+                "failed to start published port proxy for "
+                f"service {service_name!r} in workspace {workspace_id!r}"
+            )
+        time.sleep(0.05)
+    _stop_workspace_published_port_proxy(
+        WorkspacePublishedPortRecord(
+            guest_port=spec.guest_port,
+            host_port=spec.host_port or 0,
+            proxy_pid=process.pid,
+        )
+    )
+    ready_path.unlink(missing_ok=True)
+    raise RuntimeError(
+        "timed out waiting for published port proxy readiness for "
+        f"service {service_name!r} in workspace {workspace_id!r}"
+    )
+
+
+def _stop_workspace_published_port_proxy(published_port: WorkspacePublishedPortRecord) -> None:
+    if published_port.proxy_pid is None:
+        return
+    try:
+        os.killpg(published_port.proxy_pid, signal.SIGTERM)
+    except ProcessLookupError:
+        return
+    deadline = time.monotonic() + 5
+    while time.monotonic() < deadline:
+        if not _pid_is_running(published_port.proxy_pid):
+            return
+        time.sleep(0.05)
+    try:
+        os.killpg(published_port.proxy_pid, signal.SIGKILL)
+    except ProcessLookupError:
+        return
+
+
 def _instance_workspace_host_dir(instance: VmInstance) -> Path:
     raw_value = instance.metadata.get("workspace_host_dir")
     if raw_value is None or raw_value == "":
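The helper above delegates the actual byte shuffling to the `pyro_mcp.workspace_ports` subprocess, whose internals are not part of this diff. The core of such a localhost TCP forwarder can be sketched as follows (a minimal threaded sketch under that assumption; names are illustrative and error handling is reduced to the essentials):

```python
import socket
import threading


def _pump(src: socket.socket, dst: socket.socket) -> None:
    # Copy bytes one direction until EOF, then half-close the peer.
    try:
        while chunk := src.recv(65536):
            dst.sendall(chunk)
    except OSError:
        pass
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass


def serve_proxy(listen_host: str, listen_port: int, target: tuple) -> socket.socket:
    """Bind a localhost listener; each accepted client is piped to `target`."""
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind((listen_host, listen_port))  # port 0 -> kernel picks a free port
    listener.listen(16)

    def accept_loop() -> None:
        while True:
            try:
                client, _ = listener.accept()
            except OSError:
                return  # listener closed: shut the loop down
            upstream = socket.create_connection(target, timeout=5)
            threading.Thread(target=_pump, args=(client, upstream), daemon=True).start()
            threading.Thread(target=_pump, args=(upstream, client), daemon=True).start()

    threading.Thread(target=accept_loop, daemon=True).start()
    return listener
```

Running the forwarder as a separate process in its own session (`start_new_session=True` above) is what lets `os.killpg` tear down the whole proxy, threads and all, on service stop, reset, and delete.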
@@ -3057,13 +3304,14 @@ class VmManager:
         vcpu_count: int = DEFAULT_VCPU_COUNT,
         mem_mib: int = DEFAULT_MEM_MIB,
         ttl_seconds: int = DEFAULT_TTL_SECONDS,
-        network: bool = False,
+        network_policy: WorkspaceNetworkPolicy | str = DEFAULT_WORKSPACE_NETWORK_POLICY,
         allow_host_compat: bool = DEFAULT_ALLOW_HOST_COMPAT,
         seed_path: str | Path | None = None,
         secrets: list[dict[str, str]] | None = None,
     ) -> dict[str, Any]:
         self._validate_limits(vcpu_count=vcpu_count, mem_mib=mem_mib, ttl_seconds=ttl_seconds)
         get_environment(environment, runtime_paths=self._runtime_paths)
+        normalized_network_policy = _normalize_workspace_network_policy(str(network_policy))
         prepared_seed = self._prepare_workspace_seed(seed_path)
         now = time.time()
         workspace_id = uuid.uuid4().hex[:12]
@@ -3097,12 +3345,13 @@ class VmManager:
             created_at=now,
             expires_at=now + ttl_seconds,
             workdir=runtime_dir,
-            network_requested=network,
+            network_requested=normalized_network_policy != "off",
             allow_host_compat=allow_host_compat,
         )
         instance.metadata["allow_host_compat"] = str(allow_host_compat).lower()
         instance.metadata["workspace_path"] = WORKSPACE_GUEST_PATH
         instance.metadata["workspace_host_dir"] = str(host_workspace_dir)
+        instance.metadata["network_policy"] = normalized_network_policy
         try:
             with self._lock:
                 self._reap_expired_locked(now)
@@ -3112,6 +3361,9 @@ class VmManager:
                     raise RuntimeError(
                         f"max active VMs reached ({self._max_active_vms}); delete old VMs first"
                     )
+                self._require_workspace_network_policy_support(
+                    network_policy=normalized_network_policy
+                )
                 self._backend.create(instance)
                 if self._runtime_capabilities.supports_guest_exec:
                     self._ensure_workspace_guest_bootstrap_support(instance)
@@ -3119,6 +3371,7 @@ class VmManager:
                 self._start_instance_locked(instance)
                 workspace = WorkspaceRecord.from_instance(
                     instance,
+                    network_policy=normalized_network_policy,
                     workspace_seed=prepared_seed.to_payload(),
                     secrets=secret_records,
                 )
@@ -3435,6 +3688,9 @@ class VmManager:
                 recreated = workspace.to_instance(
                     workdir=self._workspace_runtime_dir(workspace.workspace_id)
                 )
+                self._require_workspace_network_policy_support(
+                    network_policy=workspace.network_policy
+                )
                 self._backend.create(recreated)
                 if self._runtime_capabilities.supports_guest_exec:
                     self._ensure_workspace_guest_bootstrap_support(recreated)
@@ -3798,11 +4054,13 @@ class VmManager:
         ready_timeout_seconds: int = DEFAULT_SERVICE_READY_TIMEOUT_SECONDS,
         ready_interval_ms: int = DEFAULT_SERVICE_READY_INTERVAL_MS,
         secret_env: dict[str, str] | None = None,
+        published_ports: list[dict[str, Any]] | None = None,
     ) -> dict[str, Any]:
         normalized_service_name = _normalize_workspace_service_name(service_name)
         normalized_cwd, _ = _normalize_workspace_destination(cwd)
         normalized_readiness = _normalize_workspace_service_readiness(readiness)
         normalized_secret_env = _normalize_workspace_secret_env_mapping(secret_env)
+        normalized_published_ports = _normalize_workspace_published_port_specs(published_ports)
         if ready_timeout_seconds <= 0:
             raise ValueError("ready_timeout_seconds must be positive")
         if ready_interval_ms <= 0:
@@ -3810,6 +4068,16 @@ class VmManager:
         with self._lock:
             workspace = self._load_workspace_locked(workspace_id)
             instance = self._workspace_instance_for_live_service_locked(workspace)
+            if normalized_published_ports:
+                if workspace.network_policy != "egress+published-ports":
+                    raise RuntimeError(
+                        "published ports require workspace network_policy "
+                        "'egress+published-ports'"
+                    )
+                if instance.network is None:
+                    raise RuntimeError(
+                        "published ports require an active guest network configuration"
+                    )
             redact_values = self._workspace_secret_redact_values_locked(workspace)
             env_values = self._workspace_secret_env_values_locked(workspace, normalized_secret_env)
             if workspace.secrets and normalized_secret_env:
@@ -3852,6 +4120,36 @@ class VmManager:
             service_name=normalized_service_name,
             payload=payload,
         )
+        if normalized_published_ports:
+            assert instance.network is not None  # guarded above
+            try:
+                service.published_ports = self._start_workspace_service_published_ports(
+                    workspace=workspace,
+                    service=service,
+                    guest_ip=instance.network.guest_ip,
+                    published_ports=normalized_published_ports,
+                )
+            except Exception:
+                try:
+                    failed_payload = self._backend.stop_service(
+                        instance,
+                        workspace_id=workspace_id,
+                        service_name=normalized_service_name,
+                    )
+                    service = self._workspace_service_record_from_payload(
+                        workspace_id=workspace_id,
+                        service_name=normalized_service_name,
+                        payload=failed_payload,
+                        published_ports=[],
+                    )
+                except Exception:
+                    service.state = "failed"
+                    service.stop_reason = "published_port_failed"
+                    service.ended_at = service.ended_at or time.time()
+                else:
+                    service.state = "failed"
+                    service.stop_reason = "published_port_failed"
+                    service.ended_at = service.ended_at or time.time()
         with self._lock:
             workspace = self._load_workspace_locked(workspace_id)
             workspace.state = instance.state
@@ -3916,7 +4214,21 @@ class VmManager:
             service_name=normalized_service_name,
             payload=payload,
             metadata=service.metadata,
+            published_ports=service.published_ports,
         )
+        if service.published_ports:
+            for published_port in service.published_ports:
+                _stop_workspace_published_port_proxy(published_port)
+            service.published_ports = [
+                WorkspacePublishedPortRecord(
+                    guest_port=published_port.guest_port,
+                    host_port=published_port.host_port,
+                    host=published_port.host,
+                    protocol=published_port.protocol,
+                    proxy_pid=None,
+                )
+                for published_port in service.published_ports
+            ]
         with self._lock:
             workspace = self._load_workspace_locked(workspace_id)
             workspace.state = instance.state
@@ -3956,6 +4268,7 @@ class VmManager:
             service_name=normalized_service_name,
             payload=payload,
             metadata=service.metadata,
+            published_ports=service.published_ports,
         )
         with self._lock:
             workspace = self._load_workspace_locked(workspace_id)
@@ -4059,6 +4372,7 @@ class VmManager:
             "created_at": workspace.created_at,
             "expires_at": workspace.expires_at,
             "state": workspace.state,
+            "network_policy": workspace.network_policy,
             "network_enabled": workspace.network is not None,
             "allow_host_compat": workspace.allow_host_compat,
             "guest_ip": workspace.network.guest_ip if workspace.network is not None else None,
@@ -4107,6 +4421,10 @@ class VmManager:
             "readiness": dict(service.readiness) if service.readiness is not None else None,
             "ready_at": service.ready_at,
             "stop_reason": service.stop_reason,
+            "published_ports": [
+                _serialize_workspace_published_port_public(published_port)
+                for published_port in service.published_ports
+            ],
         }

     def _serialize_workspace_snapshot(self, snapshot: WorkspaceSnapshotRecord) -> dict[str, Any]:
@@ -4182,6 +4500,23 @@ class VmManager:
                 f"workspace: {reason}"
             )

+    def _require_workspace_network_policy_support(
+        self,
+        *,
+        network_policy: WorkspaceNetworkPolicy,
+    ) -> None:
+        if network_policy == "off":
+            return
+        if self._runtime_capabilities.supports_guest_network:
+            return
+        reason = self._runtime_capabilities.reason or (
+            "runtime does not support guest-backed workspace networking"
+        )
+        raise RuntimeError(
+            "workspace network_policy requires guest networking and is unavailable for this "
+            f"workspace: {reason}"
+        )
+
     def _workspace_secret_values_locked(self, workspace: WorkspaceRecord) -> dict[str, str]:
         return _load_workspace_secret_values(
             workspace_dir=self._workspace_dir(workspace.workspace_id),
@@ -4609,9 +4944,15 @@ class VmManager:
         service_name: str,
         payload: dict[str, Any],
         metadata: dict[str, str] | None = None,
+        published_ports: list[WorkspacePublishedPortRecord] | None = None,
     ) -> WorkspaceServiceRecord:
         readiness_payload = payload.get("readiness")
         readiness = dict(readiness_payload) if isinstance(readiness_payload, dict) else None
+        normalized_published_ports = _workspace_published_port_records(
+            payload.get("published_ports")
+        )
+        if not normalized_published_ports and published_ports is not None:
+            normalized_published_ports = list(published_ports)
         return WorkspaceServiceRecord(
             workspace_id=workspace_id,
             service_name=str(payload.get("service_name", service_name)),
@@ -4632,6 +4973,7 @@ class VmManager:
             pid=None if payload.get("pid") is None else int(payload.get("pid", 0)),
             execution_mode=str(payload.get("execution_mode", "pending")),
             stop_reason=_optional_str(payload.get("stop_reason")),
+            published_ports=normalized_published_ports,
             metadata=dict(metadata or {}),
         )

@@ -4652,6 +4994,33 @@ class VmManager:
         services = self._list_workspace_services_locked(workspace_id)
         return len(services), sum(1 for service in services if service.state == "running")

+    def _start_workspace_service_published_ports(
+        self,
+        *,
+        workspace: WorkspaceRecord,
+        service: WorkspaceServiceRecord,
+        guest_ip: str,
+        published_ports: list[WorkspacePublishedPortSpec],
+    ) -> list[WorkspacePublishedPortRecord]:
+        services_dir = self._workspace_services_dir(workspace.workspace_id)
+        started: list[WorkspacePublishedPortRecord] = []
+        try:
+            for spec in published_ports:
+                started.append(
+                    _start_workspace_published_port_proxy(
+                        services_dir=services_dir,
+                        service_name=service.service_name,
+                        workspace_id=workspace.workspace_id,
+                        guest_ip=guest_ip,
+                        spec=spec,
+                    )
+                )
+        except Exception:
+            for published_port in started:
+                _stop_workspace_published_port_proxy(published_port)
+            raise
+        return started
+
     def _workspace_baseline_snapshot_locked(
         self,
         workspace: WorkspaceRecord,
@@ -4770,12 +5139,18 @@ class VmManager:
         workspace_id: str,
         service_name: str,
     ) -> None:
+        existing = self._load_workspace_service_locked_optional(workspace_id, service_name)
+        if existing is not None:
+            for published_port in existing.published_ports:
+                _stop_workspace_published_port_proxy(published_port)
         self._workspace_service_record_path(workspace_id, service_name).unlink(missing_ok=True)
         services_dir = self._workspace_services_dir(workspace_id)
         _workspace_service_stdout_path(services_dir, service_name).unlink(missing_ok=True)
         _workspace_service_stderr_path(services_dir, service_name).unlink(missing_ok=True)
         _workspace_service_status_path(services_dir, service_name).unlink(missing_ok=True)
         _workspace_service_runner_path(services_dir, service_name).unlink(missing_ok=True)
+        for ready_path in services_dir.glob(f"{service_name}.port-*.ready.json"):
+            ready_path.unlink(missing_ok=True)

     def _delete_workspace_snapshot_locked(self, workspace_id: str, snapshot_name: str) -> None:
         self._workspace_snapshot_metadata_path(workspace_id, snapshot_name).unlink(missing_ok=True)
@@ -4881,7 +5256,21 @@ class VmManager:
             service_name=service.service_name,
             payload=payload,
             metadata=service.metadata,
+            published_ports=service.published_ports,
         )
+        if refreshed.state != "running" and refreshed.published_ports:
+            refreshed.published_ports = [
+                WorkspacePublishedPortRecord(
+                    guest_port=published_port.guest_port,
+                    host_port=published_port.host_port,
+                    host=published_port.host,
+                    protocol=published_port.protocol,
+                    proxy_pid=None,
+                )
+                for published_port in refreshed.published_ports
+            ]
+            for published_port in service.published_ports:
+                _stop_workspace_published_port_proxy(published_port)
         self._save_workspace_service_locked(refreshed)
         return refreshed

@@ -4904,6 +5293,8 @@ class VmManager:
         changed = False
         for service in services:
             if service.state == "running":
+                for published_port in service.published_ports:
+                    _stop_workspace_published_port_proxy(published_port)
                 service.state = "stopped"
                 service.stop_reason = "workspace_stopped"
                 service.ended_at = service.ended_at or time.time()
@@ -4936,6 +5327,7 @@ class VmManager:
                     service_name=service.service_name,
                     payload=payload,
                     metadata=service.metadata,
+                    published_ports=service.published_ports,
                 )
                 self._save_workspace_service_locked(stopped)
             except Exception:
src/pyro_mcp/workspace_ports.py (new file, +116)
@@ -0,0 +1,116 @@
+"""Localhost-only TCP port proxy for published workspace services."""
+
+from __future__ import annotations
+
+import argparse
+import json
+import selectors
+import signal
+import socket
+import socketserver
+import sys
+import threading
+from pathlib import Path
+
+DEFAULT_PUBLISHED_PORT_HOST = "127.0.0.1"
+
+
+class _ProxyServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
+    allow_reuse_address = False
+    daemon_threads = True
+
+    def __init__(self, server_address: tuple[str, int], target_address: tuple[str, int]) -> None:
+        super().__init__(server_address, _ProxyHandler)
+        self.target_address = target_address
+
+
+class _ProxyHandler(socketserver.BaseRequestHandler):
+    def handle(self) -> None:
+        server = self.server
+        if not isinstance(server, _ProxyServer):
+            raise RuntimeError("proxy server is invalid")
+        try:
+            upstream = socket.create_connection(server.target_address, timeout=5)
+        except OSError:
+            return
+        with upstream:
+            self.request.setblocking(False)
+            upstream.setblocking(False)
+            selector = selectors.DefaultSelector()
+            try:
+                selector.register(self.request, selectors.EVENT_READ, upstream)
+                selector.register(upstream, selectors.EVENT_READ, self.request)
+                while True:
+                    events = selector.select()
+                    if not events:
+                        continue
+                    for key, _ in events:
+                        source = key.fileobj
+                        target = key.data
+                        if not isinstance(source, socket.socket) or not isinstance(
+                            target, socket.socket
+                        ):
+                            continue
+                        try:
+                            chunk = source.recv(65536)
+                        except OSError:
+                            return
+                        if not chunk:
+                            return
+                        try:
+                            target.sendall(chunk)
+                        except OSError:
+                            return
+            finally:
+                selector.close()
+
+
+def _build_parser() -> argparse.ArgumentParser:
+    parser = argparse.ArgumentParser(description="Run a localhost-only TCP port proxy.")
+    parser.add_argument("--listen-host", required=True)
+    parser.add_argument("--listen-port", type=int, required=True)
+    parser.add_argument("--target-host", required=True)
+    parser.add_argument("--target-port", type=int, required=True)
+    parser.add_argument("--ready-file", required=True)
+    return parser
+
+
+def main(argv: list[str] | None = None) -> int:
+    args = _build_parser().parse_args(argv)
+    ready_file = Path(args.ready_file)
+    ready_file.parent.mkdir(parents=True, exist_ok=True)
+    server = _ProxyServer(
+        (str(args.listen_host), int(args.listen_port)),
+        (str(args.target_host), int(args.target_port)),
+    )
+    actual_host = str(server.server_address[0])
+    actual_port = int(server.server_address[1])
+    ready_file.write_text(
+        json.dumps(
+            {
+                "host": actual_host,
+                "host_port": actual_port,
+                "target_host": args.target_host,
+                "target_port": int(args.target_port),
+                "protocol": "tcp",
+            },
+            indent=2,
+            sort_keys=True,
+        ),
+        encoding="utf-8",
+    )
+
+    def _shutdown(_: int, __: object) -> None:
+        threading.Thread(target=server.shutdown, daemon=True).start()
+
+    signal.signal(signal.SIGTERM, _shutdown)
+    signal.signal(signal.SIGINT, _shutdown)
+    try:
+        server.serve_forever(poll_interval=0.2)
+    finally:
+        server.server_close()
+    return 0
+
+
+if __name__ == "__main__":
+    raise SystemExit(main(sys.argv[1:]))
@@ -123,6 +123,74 @@ def test_pyro_create_vm_defaults_sizing_and_host_compat(tmp_path: Path) -> None:
     assert created["allow_host_compat"] is True


+def test_pyro_workspace_network_policy_and_published_ports_delegate() -> None:
+    calls: list[tuple[str, dict[str, Any]]] = []
+
+    class StubManager:
+        def create_workspace(self, **kwargs: Any) -> dict[str, Any]:
+            calls.append(("create_workspace", kwargs))
+            return {"workspace_id": "workspace-123"}
+
+        def start_service(
+            self,
+            workspace_id: str,
+            service_name: str,
+            **kwargs: Any,
+        ) -> dict[str, Any]:
+            calls.append(
+                (
+                    "start_service",
+                    {
+                        "workspace_id": workspace_id,
+                        "service_name": service_name,
+                        **kwargs,
+                    },
+                )
+            )
+            return {"workspace_id": workspace_id, "service_name": service_name, "state": "running"}
+
+    pyro = Pyro(manager=cast(Any, StubManager()))
+
+    pyro.create_workspace(
+        environment="debian:12",
+        network_policy="egress+published-ports",
+    )
+    pyro.start_service(
+        "workspace-123",
+        "web",
+        command="python3 -m http.server 8080",
+        published_ports=[{"guest_port": 8080, "host_port": 18080}],
+    )
+
+    assert calls[0] == (
+        "create_workspace",
+        {
+            "environment": "debian:12",
+            "vcpu_count": 1,
+            "mem_mib": 1024,
+            "ttl_seconds": 600,
+            "network_policy": "egress+published-ports",
+            "allow_host_compat": False,
+            "seed_path": None,
+            "secrets": None,
+        },
+    )
+    assert calls[1] == (
+        "start_service",
+        {
+            "workspace_id": "workspace-123",
+            "service_name": "web",
+            "command": "python3 -m http.server 8080",
+            "cwd": "/workspace",
+            "readiness": None,
+            "ready_timeout_seconds": 30,
+            "ready_interval_ms": 500,
+            "secret_env": None,
+            "published_ports": [{"guest_port": 8080, "host_port": 18080}],
+        },
+    )
+
+
 def test_pyro_workspace_methods_delegate_to_manager(tmp_path: Path) -> None:
     pyro = Pyro(
         manager=VmManager(
@@ -472,6 +472,7 @@ def test_cli_workspace_create_prints_json(
         def create_workspace(self, **kwargs: Any) -> dict[str, Any]:
             assert kwargs["environment"] == "debian:12"
             assert kwargs["seed_path"] == "./repo"
+            assert kwargs["network_policy"] == "egress"
             return {"workspace_id": "workspace-123", "state": "started"}

     class StubParser:
@@ -483,7 +484,7 @@ def test_cli_workspace_create_prints_json(
                 vcpu_count=1,
                 mem_mib=1024,
                 ttl_seconds=600,
-                network=False,
+                network_policy="egress",
                 allow_host_compat=False,
                 seed_path="./repo",
                 json=True,
@@ -506,6 +507,7 @@ def test_cli_workspace_create_prints_human(
         "workspace_id": "workspace-123",
         "environment": "debian:12",
         "state": "started",
+        "network_policy": "off",
         "workspace_path": "/workspace",
         "workspace_seed": {
             "mode": "directory",
@@ -530,7 +532,7 @@ def test_cli_workspace_create_prints_human(
                 vcpu_count=1,
                 mem_mib=1024,
                 ttl_seconds=600,
-                network=False,
+                network_policy="off",
                 allow_host_compat=False,
                 seed_path="/tmp/repo",
                 json=False,
@@ -2047,12 +2049,21 @@ def test_cli_workspace_service_start_prints_json(
             assert service_name == "app"
             assert kwargs["command"] == "sh -lc 'touch .ready && while true; do sleep 60; done'"
             assert kwargs["readiness"] == {"type": "file", "path": ".ready"}
+            assert kwargs["published_ports"] == [{"host_port": 18080, "guest_port": 8080}]
             return {
                 "workspace_id": workspace_id,
                 "service_name": service_name,
                 "state": "running",
                 "cwd": "/workspace",
                 "execution_mode": "guest_vsock",
+                "published_ports": [
+                    {
+                        "host": "127.0.0.1",
+                        "host_port": 18080,
+                        "guest_port": 8080,
+                        "protocol": "tcp",
+                    }
+                ],
             }

     class StartParser:
@@ -2070,6 +2081,7 @@ def test_cli_workspace_service_start_prints_json(
                 ready_command=None,
                 ready_timeout_seconds=30,
                 ready_interval_ms=500,
+                publish=["18080:8080"],
                 json=True,
                 command_args=["--", "sh", "-lc", "touch .ready && while true; do sleep 60; done"],
             )
@@ -2149,6 +2161,14 @@ def test_cli_workspace_service_list_prints_human(
                     "state": "running",
                     "cwd": "/workspace",
                     "execution_mode": "guest_vsock",
+                    "published_ports": [
+                        {
+                            "host": "127.0.0.1",
+                            "host_port": 18080,
+                            "guest_port": 8080,
+                            "protocol": "tcp",
+                        }
+                    ],
                     "readiness": {"type": "file", "path": "/workspace/.ready"},
                 },
                 {
@@ -2176,7 +2196,7 @@ def test_cli_workspace_service_list_prints_human(
     monkeypatch.setattr(cli, "Pyro", StubPyro)
     cli.main()
     captured = capsys.readouterr()
-    assert "app [running] cwd=/workspace" in captured.out
+    assert "app [running] cwd=/workspace published=127.0.0.1:18080->8080/tcp" in captured.out
     assert "worker [stopped] cwd=/workspace" in captured.out


@@ -3006,6 +3026,110 @@ def test_cli_workspace_secret_parsers_validate_syntax(tmp_path: Path) -> None:
         cli._parse_workspace_secret_env_options(["TOKEN", "TOKEN=API_TOKEN"])  # noqa: SLF001


+def test_cli_workspace_publish_parser_validates_syntax() -> None:
+    assert cli._parse_workspace_publish_options(["8080"]) == [  # noqa: SLF001
+        {"host_port": None, "guest_port": 8080}
+    ]
+    assert cli._parse_workspace_publish_options(["18080:8080"]) == [  # noqa: SLF001
+        {"host_port": 18080, "guest_port": 8080}
+    ]
+
+    with pytest.raises(ValueError, match="must not be empty"):
+        cli._parse_workspace_publish_options([" "])  # noqa: SLF001
+    with pytest.raises(ValueError, match="must use GUEST_PORT or HOST_PORT:GUEST_PORT"):
+        cli._parse_workspace_publish_options(["bad"])  # noqa: SLF001
+    with pytest.raises(ValueError, match="must use GUEST_PORT or HOST_PORT:GUEST_PORT"):
+        cli._parse_workspace_publish_options(["bad:8080"])  # noqa: SLF001
+
+
+def test_cli_workspace_service_start_rejects_multiple_readiness_flags_json(
+    monkeypatch: pytest.MonkeyPatch,
+    capsys: pytest.CaptureFixture[str],
+) -> None:
+    class StubPyro:
+        def start_service(self, *args: Any, **kwargs: Any) -> dict[str, Any]:
+            raise AssertionError("start_service should not be called")
+
+    class StartParser:
+        def parse_args(self) -> argparse.Namespace:
+            return argparse.Namespace(
+                command="workspace",
+                workspace_command="service",
+                workspace_service_command="start",
+                workspace_id="workspace-123",
+                service_name="app",
+                cwd="/workspace",
+                ready_file=".ready",
+                ready_tcp=None,
+                ready_http="http://127.0.0.1:8080/",
+                ready_command=None,
+                ready_timeout_seconds=30,
+                ready_interval_ms=500,
+                publish=[],
+                json=True,
+                command_args=["--", "sh", "-lc", "touch .ready && while true; do sleep 60; done"],
+            )
+
+    monkeypatch.setattr(cli, "_build_parser", lambda: StartParser())
+    monkeypatch.setattr(cli, "Pyro", StubPyro)
+    with pytest.raises(SystemExit, match="1"):
+        cli.main()
+    payload = json.loads(capsys.readouterr().out)
+    assert "choose at most one" in payload["error"]
+
+
+def test_cli_workspace_service_start_prints_human_with_ready_http(
+    monkeypatch: pytest.MonkeyPatch,
+    capsys: pytest.CaptureFixture[str],
+) -> None:
+    class StubPyro:
+        def start_service(
+            self,
+            workspace_id: str,
+            service_name: str,
+            **kwargs: Any,
+        ) -> dict[str, Any]:
+            assert workspace_id == "workspace-123"
+            assert service_name == "app"
+            assert kwargs["readiness"] == {"type": "http", "url": "http://127.0.0.1:8080/ready"}
+            return {
+                "workspace_id": workspace_id,
+                "service_name": service_name,
+                "state": "running",
+                "cwd": "/workspace",
+                "execution_mode": "guest_vsock",
+                "readiness": kwargs["readiness"],
+            }
+
+    class StartParser:
+        def parse_args(self) -> argparse.Namespace:
+            return argparse.Namespace(
+                command="workspace",
+                workspace_command="service",
+                workspace_service_command="start",
+                workspace_id="workspace-123",
+                service_name="app",
+                cwd="/workspace",
+                ready_file=None,
+                ready_tcp=None,
+                ready_http="http://127.0.0.1:8080/ready",
+                ready_command=None,
+                ready_timeout_seconds=30,
+                ready_interval_ms=500,
+                publish=[],
+                secret_env=[],
+                json=False,
+                command_args=["--", "sh", "-lc", "while true; do sleep 60; done"],
+            )
+
+    monkeypatch.setattr(cli, "_build_parser", lambda: StartParser())
+    monkeypatch.setattr(cli, "Pyro", StubPyro)
+    cli.main()
+    captured = capsys.readouterr()
+    assert "workspace-service-start" in captured.err
+    assert "service_name=app" in captured.err
+
+
 def test_print_workspace_summary_human_includes_secret_metadata(
     capsys: pytest.CaptureFixture[str],
 ) -> None:
@@ -1393,6 +1393,775 @@ def test_workspace_service_lifecycle_and_status_counts(tmp_path: Path) -> None:
     assert deleted["deleted"] is True


+def test_workspace_create_serializes_network_policy(tmp_path: Path) -> None:
+    manager = VmManager(
+        backend_name="mock",
+        base_dir=tmp_path / "vms",
+        network_manager=TapNetworkManager(enabled=False),
+    )
+    manager._runtime_capabilities = RuntimeCapabilities(  # noqa: SLF001
+        supports_vm_boot=True,
+        supports_guest_exec=True,
+        supports_guest_network=True,
+    )
+    manager._ensure_workspace_guest_bootstrap_support = lambda instance: None  # type: ignore[method-assign] # noqa: SLF001
+
+    created = manager.create_workspace(
+        environment="debian:12-base",
+        network_policy="egress",
+    )
+
+    assert created["network_policy"] == "egress"
+    workspace_id = str(created["workspace_id"])
+    workspace_path = tmp_path / "vms" / "workspaces" / workspace_id / "workspace.json"
+    payload = json.loads(workspace_path.read_text(encoding="utf-8"))
+    assert payload["network_policy"] == "egress"
+
+
+def test_workspace_service_start_serializes_published_ports(
+    tmp_path: Path,
+    monkeypatch: pytest.MonkeyPatch,
+) -> None:
+    manager = VmManager(
+        backend_name="mock",
+        base_dir=tmp_path / "vms",
+        network_manager=TapNetworkManager(enabled=False),
+    )
+    manager._runtime_capabilities = RuntimeCapabilities(  # noqa: SLF001
+        supports_vm_boot=True,
+        supports_guest_exec=True,
+        supports_guest_network=True,
+    )
+    manager._ensure_workspace_guest_bootstrap_support = lambda instance: None  # type: ignore[method-assign] # noqa: SLF001
+    created = manager.create_workspace(
+        environment="debian:12-base",
+        network_policy="egress+published-ports",
+        allow_host_compat=True,
+    )
+    workspace_id = str(created["workspace_id"])
+
+    workspace_path = tmp_path / "vms" / "workspaces" / workspace_id / "workspace.json"
+    payload = json.loads(workspace_path.read_text(encoding="utf-8"))
+    payload["network"] = {
+        "vm_id": workspace_id,
+        "tap_name": "tap-test0",
+        "guest_ip": "172.29.1.2",
+        "gateway_ip": "172.29.1.1",
+        "subnet_cidr": "172.29.1.0/30",
+        "mac_address": "06:00:ac:1d:01:02",
+        "dns_servers": ["1.1.1.1"],
+    }
+    workspace_path.write_text(json.dumps(payload, indent=2, sort_keys=True), encoding="utf-8")
+
+    monkeypatch.setattr(
+        manager,
+        "_start_workspace_service_published_ports",
+        lambda **kwargs: [
+            vm_manager_module.WorkspacePublishedPortRecord(
+                guest_port=8080,
+                host_port=18080,
+                host="127.0.0.1",
+                protocol="tcp",
+                proxy_pid=9999,
+            )
+        ],
+    )
+    monkeypatch.setattr(
+        manager,
+        "_refresh_workspace_liveness_locked",
+        lambda workspace: None,
+    )
+
+    started = manager.start_service(
+        workspace_id,
+        "web",
+        command="sh -lc 'touch .ready && while true; do sleep 60; done'",
+        readiness={"type": "file", "path": ".ready"},
+        published_ports=[{"guest_port": 8080, "host_port": 18080}],
+    )
+
+    assert started["published_ports"] == [
+        {
+            "host": "127.0.0.1",
+            "host_port": 18080,
+            "guest_port": 8080,
+            "protocol": "tcp",
+        }
+    ]
+
+
+def test_workspace_service_start_rejects_published_ports_without_network_policy(
+    tmp_path: Path,
+) -> None:
+    manager = VmManager(
+        backend_name="mock",
+        base_dir=tmp_path / "vms",
+        network_manager=TapNetworkManager(enabled=False),
+    )
+    workspace_id = str(
+        manager.create_workspace(
+            environment="debian:12-base",
+            allow_host_compat=True,
+        )["workspace_id"]
+    )
+
+    with pytest.raises(
+        RuntimeError,
+        match="published ports require workspace network_policy 'egress\\+published-ports'",
+    ):
+        manager.start_service(
+            workspace_id,
+            "web",
+            command="sh -lc 'touch .ready && while true; do sleep 60; done'",
+            readiness={"type": "file", "path": ".ready"},
+            published_ports=[{"guest_port": 8080}],
+        )
+
+
+def test_workspace_service_start_rejects_published_ports_without_active_network(
+    tmp_path: Path,
+    monkeypatch: pytest.MonkeyPatch,
+) -> None:
+    manager = VmManager(
+        backend_name="mock",
+        base_dir=tmp_path / "vms",
+        network_manager=TapNetworkManager(enabled=False),
+    )
+    manager._runtime_capabilities = RuntimeCapabilities(  # noqa: SLF001
+        supports_vm_boot=True,
+        supports_guest_exec=True,
+        supports_guest_network=True,
+    )
+    manager._ensure_workspace_guest_bootstrap_support = lambda instance: None  # type: ignore[method-assign] # noqa: SLF001
+    monkeypatch.setattr(
+        manager,
+        "_refresh_workspace_liveness_locked",
+        lambda workspace: None,
+    )
+    workspace_id = str(
+        manager.create_workspace(
+            environment="debian:12-base",
+            network_policy="egress+published-ports",
+            allow_host_compat=True,
+        )["workspace_id"]
+    )
+
+    with pytest.raises(RuntimeError, match="published ports require an active guest network"):
+        manager.start_service(
+            workspace_id,
+            "web",
+            command="sh -lc 'touch .ready && while true; do sleep 60; done'",
+            readiness={"type": "file", "path": ".ready"},
+            published_ports=[{"guest_port": 8080}],
+        )
+
+
+def test_workspace_service_start_published_port_failure_marks_service_failed(
+    tmp_path: Path,
+    monkeypatch: pytest.MonkeyPatch,
+) -> None:
+    manager = VmManager(
+        backend_name="mock",
+        base_dir=tmp_path / "vms",
+        network_manager=TapNetworkManager(enabled=False),
+    )
+    manager._runtime_capabilities = RuntimeCapabilities(  # noqa: SLF001
+        supports_vm_boot=True,
+        supports_guest_exec=True,
+        supports_guest_network=True,
+    )
+    manager._ensure_workspace_guest_bootstrap_support = lambda instance: None  # type: ignore[method-assign] # noqa: SLF001
+    monkeypatch.setattr(
+        manager,
+        "_refresh_workspace_liveness_locked",
+        lambda workspace: None,
+    )
+    created = manager.create_workspace(
+        environment="debian:12-base",
+        network_policy="egress+published-ports",
+        allow_host_compat=True,
+    )
+    workspace_id = str(created["workspace_id"])
+
+    workspace_path = tmp_path / "vms" / "workspaces" / workspace_id / "workspace.json"
+    payload = json.loads(workspace_path.read_text(encoding="utf-8"))
+    payload["network"] = {
+        "vm_id": workspace_id,
+        "tap_name": "tap-test0",
+        "guest_ip": "172.29.1.2",
+        "gateway_ip": "172.29.1.1",
+        "subnet_cidr": "172.29.1.0/30",
+        "mac_address": "06:00:ac:1d:01:02",
+        "dns_servers": ["1.1.1.1"],
+    }
+    workspace_path.write_text(json.dumps(payload, indent=2, sort_keys=True), encoding="utf-8")
+
+    def _raise_proxy_failure(
+        **kwargs: object,
+    ) -> list[vm_manager_module.WorkspacePublishedPortRecord]:
+        del kwargs
+        raise RuntimeError("proxy boom")
+
+    monkeypatch.setattr(manager, "_start_workspace_service_published_ports", _raise_proxy_failure)
+
+    started = manager.start_service(
+        workspace_id,
+        "web",
+        command="sh -lc 'touch .ready && while true; do sleep 60; done'",
+        readiness={"type": "file", "path": ".ready"},
+        published_ports=[{"guest_port": 8080, "host_port": 18080}],
)
|
||||
|
||||
assert started["state"] == "failed"
|
||||
assert started["stop_reason"] == "published_port_failed"
|
||||
assert started["published_ports"] == []
|
||||
|
||||
|
||||
def test_workspace_service_cleanup_stops_published_port_proxies(
    tmp_path: Path,
    monkeypatch: pytest.MonkeyPatch,
) -> None:
    manager = VmManager(
        backend_name="mock",
        base_dir=tmp_path / "vms",
        network_manager=TapNetworkManager(enabled=False),
    )
    workspace_id = "workspace-cleanup"
    service = vm_manager_module.WorkspaceServiceRecord(
        workspace_id=workspace_id,
        service_name="web",
        command="sleep 60",
        cwd="/workspace",
        state="running",
        pid=1234,
        started_at=time.time(),
        ended_at=None,
        exit_code=None,
        execution_mode="host_compat",
        readiness=None,
        ready_at=None,
        stop_reason=None,
        published_ports=[
            vm_manager_module.WorkspacePublishedPortRecord(
                guest_port=8080,
                host_port=18080,
                proxy_pid=9999,
            )
        ],
    )
    manager._save_workspace_service_locked(service)  # noqa: SLF001
    stopped: list[int | None] = []
    monkeypatch.setattr(
        vm_manager_module,
        "_stop_workspace_published_port_proxy",
        lambda published_port: stopped.append(published_port.proxy_pid),
    )

    manager._delete_workspace_service_artifacts_locked(workspace_id, "web")  # noqa: SLF001

    assert stopped == [9999]
    assert not manager._workspace_service_record_path(workspace_id, "web").exists()  # noqa: SLF001


def test_workspace_refresh_workspace_service_counts_stops_published_ports_when_stopped(
    tmp_path: Path,
    monkeypatch: pytest.MonkeyPatch,
) -> None:
    manager = VmManager(
        backend_name="mock",
        base_dir=tmp_path / "vms",
        network_manager=TapNetworkManager(enabled=False),
    )
    workspace = vm_manager_module.WorkspaceRecord(
        workspace_id="workspace-counts",
        environment="debian:12-base",
        vcpu_count=1,
        mem_mib=1024,
        ttl_seconds=600,
        created_at=time.time(),
        expires_at=time.time() + 600,
        state="stopped",
        firecracker_pid=None,
        last_error=None,
        allow_host_compat=True,
        network_policy="off",
        metadata={},
        command_count=0,
        last_command=None,
        workspace_seed={
            "mode": "empty",
            "seed_path": None,
            "destination": "/workspace",
            "entry_count": 0,
            "bytes_written": 0,
        },
        secrets=[],
        reset_count=0,
        last_reset_at=None,
    )
    service = vm_manager_module.WorkspaceServiceRecord(
        workspace_id=workspace.workspace_id,
        service_name="web",
        command="sleep 60",
        cwd="/workspace",
        state="running",
        pid=1234,
        started_at=time.time(),
        ended_at=None,
        exit_code=None,
        execution_mode="host_compat",
        readiness=None,
        ready_at=None,
        stop_reason=None,
        published_ports=[
            vm_manager_module.WorkspacePublishedPortRecord(
                guest_port=8080,
                host_port=18080,
                proxy_pid=9999,
            )
        ],
    )
    manager._save_workspace_service_locked(service)  # noqa: SLF001
    stopped: list[int | None] = []
    monkeypatch.setattr(
        vm_manager_module,
        "_stop_workspace_published_port_proxy",
        lambda published_port: stopped.append(published_port.proxy_pid),
    )

    manager._refresh_workspace_service_counts_locked(workspace)  # noqa: SLF001

    assert stopped == [9999]
    refreshed = manager._load_workspace_service_locked(workspace.workspace_id, "web")  # noqa: SLF001
    assert refreshed.state == "stopped"
    assert refreshed.stop_reason == "workspace_stopped"


def test_workspace_published_port_proxy_helpers(
    tmp_path: Path,
    monkeypatch: pytest.MonkeyPatch,
) -> None:
    services_dir = tmp_path / "services"
    services_dir.mkdir(parents=True, exist_ok=True)

    class StubProcess:
        def __init__(self, pid: int, *, exited: bool = False) -> None:
            self.pid = pid
            self._exited = exited

        def poll(self) -> int | None:
            return 1 if self._exited else None

    def _fake_popen(command: list[str], **kwargs: object) -> StubProcess:
        del kwargs
        ready_file = Path(command[command.index("--ready-file") + 1])
        ready_file.write_text(
            json.dumps(
                {
                    "host": "127.0.0.1",
                    "host_port": 18080,
                    "target_host": "172.29.1.2",
                    "target_port": 8080,
                    "protocol": "tcp",
                }
            ),
            encoding="utf-8",
        )
        return StubProcess(4242)

    monkeypatch.setattr(subprocess, "Popen", _fake_popen)

    record = vm_manager_module._start_workspace_published_port_proxy(  # noqa: SLF001
        services_dir=services_dir,
        service_name="web",
        workspace_id="workspace-proxy",
        guest_ip="172.29.1.2",
        spec=vm_manager_module.WorkspacePublishedPortSpec(
            guest_port=8080,
            host_port=18080,
        ),
    )

    assert record.guest_port == 8080
    assert record.host_port == 18080
    assert record.proxy_pid == 4242


def test_workspace_published_port_proxy_timeout_and_stop(
    tmp_path: Path,
    monkeypatch: pytest.MonkeyPatch,
) -> None:
    services_dir = tmp_path / "services"
    services_dir.mkdir(parents=True, exist_ok=True)

    class StubProcess:
        pid = 4242

        def poll(self) -> int | None:
            return None

    monkeypatch.setattr(subprocess, "Popen", lambda *args, **kwargs: StubProcess())
    monotonic_values = iter([0.0, 0.0, 5.1])
    monkeypatch.setattr(time, "monotonic", lambda: next(monotonic_values))
    monkeypatch.setattr(time, "sleep", lambda _: None)
    stopped: list[int | None] = []
    monkeypatch.setattr(
        vm_manager_module,
        "_stop_workspace_published_port_proxy",
        lambda published_port: stopped.append(published_port.proxy_pid),
    )

    with pytest.raises(RuntimeError, match="timed out waiting for published port proxy readiness"):
        vm_manager_module._start_workspace_published_port_proxy(  # noqa: SLF001
            services_dir=services_dir,
            service_name="web",
            workspace_id="workspace-proxy",
            guest_ip="172.29.1.2",
            spec=vm_manager_module.WorkspacePublishedPortSpec(
                guest_port=8080,
                host_port=18080,
            ),
        )

    assert stopped == [4242]


def test_workspace_published_port_validation_and_stop_helper(
    monkeypatch: pytest.MonkeyPatch,
) -> None:
    spec = vm_manager_module._normalize_workspace_published_port(  # noqa: SLF001
        guest_port="8080",
        host_port="18080",
    )
    assert spec.guest_port == 8080
    assert spec.host_port == 18080
    with pytest.raises(ValueError, match="published guest_port must be an integer"):
        vm_manager_module._normalize_workspace_published_port(guest_port=object())  # noqa: SLF001
    with pytest.raises(ValueError, match="published host_port must be between 1025 and 65535"):
        vm_manager_module._normalize_workspace_published_port(  # noqa: SLF001
            guest_port=8080,
            host_port=80,
        )

    signals: list[int] = []
    monkeypatch.setattr(os, "killpg", lambda pid, sig: signals.append(sig))
    running = iter([True, False])
    monkeypatch.setattr(vm_manager_module, "_pid_is_running", lambda pid: next(running))
    monkeypatch.setattr(time, "sleep", lambda _: None)

    vm_manager_module._stop_workspace_published_port_proxy(  # noqa: SLF001
        vm_manager_module.WorkspacePublishedPortRecord(
            guest_port=8080,
            host_port=18080,
            proxy_pid=9999,
        )
    )

    assert signals == [signal.SIGTERM]


def test_workspace_network_policy_requires_guest_network_support(tmp_path: Path) -> None:
    manager = VmManager(
        backend_name="firecracker",
        base_dir=tmp_path / "vms",
        network_manager=TapNetworkManager(enabled=False),
    )
    manager._runtime_capabilities = RuntimeCapabilities(  # noqa: SLF001
        supports_vm_boot=False,
        supports_guest_exec=False,
        supports_guest_network=False,
        reason="no guest network",
    )

    with pytest.raises(RuntimeError, match="workspace network_policy requires guest networking"):
        manager._require_workspace_network_policy_support(  # noqa: SLF001
            network_policy="egress"
        )


def test_prepare_workspace_seed_rejects_missing_and_invalid_paths(tmp_path: Path) -> None:
    manager = VmManager(
        backend_name="mock",
        base_dir=tmp_path / "vms",
        network_manager=TapNetworkManager(enabled=False),
    )

    with pytest.raises(ValueError, match="does not exist"):
        manager._prepare_workspace_seed(tmp_path / "missing")  # noqa: SLF001

    invalid_source = tmp_path / "seed.txt"
    invalid_source.write_text("seed", encoding="utf-8")

    with pytest.raises(
        ValueError,
        match="seed_path must be a directory or a .tar/.tar.gz/.tgz archive",
    ):
        manager._prepare_workspace_seed(invalid_source)  # noqa: SLF001


def test_workspace_baseline_snapshot_requires_archive(tmp_path: Path) -> None:
    manager = VmManager(
        backend_name="mock",
        base_dir=tmp_path / "vms",
        network_manager=TapNetworkManager(enabled=False),
    )
    created = manager.create_workspace(
        environment="debian:12",
        vcpu_count=1,
        mem_mib=512,
        ttl_seconds=600,
        allow_host_compat=True,
    )
    workspace_id = str(created["workspace_id"])
    baseline_path = tmp_path / "vms" / "workspaces" / workspace_id / "baseline" / "workspace.tar"
    baseline_path.unlink()
    workspace = manager._load_workspace_locked(workspace_id)  # noqa: SLF001

    with pytest.raises(RuntimeError, match="baseline snapshot"):
        manager._workspace_baseline_snapshot_locked(workspace)  # noqa: SLF001


def test_workspace_snapshot_and_service_loaders_handle_invalid_payloads(tmp_path: Path) -> None:
    manager = VmManager(
        backend_name="mock",
        base_dir=tmp_path / "vms",
        network_manager=TapNetworkManager(enabled=False),
    )
    workspace_id = "workspace-invalid"
    services_dir = tmp_path / "vms" / "workspaces" / workspace_id / "services"
    snapshots_dir = tmp_path / "vms" / "workspaces" / workspace_id / "snapshots"
    services_dir.mkdir(parents=True, exist_ok=True)
    snapshots_dir.mkdir(parents=True, exist_ok=True)
    (services_dir / "svc.json").write_text("[]", encoding="utf-8")
    (snapshots_dir / "snap.json").write_text("[]", encoding="utf-8")

    with pytest.raises(RuntimeError, match="service record"):
        manager._load_workspace_service_locked(workspace_id, "svc")  # noqa: SLF001
    with pytest.raises(RuntimeError, match="snapshot record"):
        manager._load_workspace_snapshot_locked(workspace_id, "snap")  # noqa: SLF001
    with pytest.raises(RuntimeError, match="snapshot record"):
        manager._load_workspace_snapshot_locked_optional(workspace_id, "snap")  # noqa: SLF001
    assert manager._load_workspace_snapshot_locked_optional(workspace_id, "missing") is None  # noqa: SLF001


def test_workspace_shell_helpers_handle_missing_invalid_and_close_errors(
    tmp_path: Path,
    monkeypatch: pytest.MonkeyPatch,
) -> None:
    manager = VmManager(
        backend_name="mock",
        base_dir=tmp_path / "vms",
        network_manager=TapNetworkManager(enabled=False),
    )
    created = manager.create_workspace(
        environment="debian:12",
        vcpu_count=1,
        mem_mib=512,
        ttl_seconds=600,
        allow_host_compat=True,
    )
    workspace_id = str(created["workspace_id"])

    assert manager._list_workspace_shells_locked(workspace_id) == []  # noqa: SLF001

    shells_dir = tmp_path / "vms" / "workspaces" / workspace_id / "shells"
    shells_dir.mkdir(parents=True, exist_ok=True)
    (shells_dir / "invalid.json").write_text("[]", encoding="utf-8")
    assert manager._list_workspace_shells_locked(workspace_id) == []  # noqa: SLF001

    shell = vm_manager_module.WorkspaceShellRecord(
        workspace_id=workspace_id,
        shell_id="shell-1",
        cwd="/workspace",
        cols=120,
        rows=30,
        state="running",
        started_at=time.time(),
    )
    manager._save_workspace_shell_locked(shell)  # noqa: SLF001
    workspace = manager._load_workspace_locked(workspace_id)  # noqa: SLF001
    instance = workspace.to_instance(
        workdir=tmp_path / "vms" / "workspaces" / workspace_id / "runtime"
    )

    def _raise_close(**kwargs: object) -> dict[str, object]:
        del kwargs
        raise RuntimeError("shell close boom")

    monkeypatch.setattr(manager._backend, "close_shell", _raise_close)
    manager._close_workspace_shells_locked(workspace, instance)  # noqa: SLF001
    assert manager._list_workspace_shells_locked(workspace_id) == []  # noqa: SLF001


def test_workspace_refresh_service_helpers_cover_exit_and_started_refresh(
    tmp_path: Path,
    monkeypatch: pytest.MonkeyPatch,
) -> None:
    manager = VmManager(
        backend_name="mock",
        base_dir=tmp_path / "vms",
        network_manager=TapNetworkManager(enabled=False),
    )
    created = manager.create_workspace(
        environment="debian:12",
        vcpu_count=1,
        mem_mib=512,
        ttl_seconds=600,
        allow_host_compat=True,
    )
    workspace_id = str(created["workspace_id"])
    workspace_path = tmp_path / "vms" / "workspaces" / workspace_id / "workspace.json"
    payload = json.loads(workspace_path.read_text(encoding="utf-8"))
    payload["state"] = "started"
    payload["network"] = {
        "vm_id": workspace_id,
        "tap_name": "tap-test0",
        "guest_ip": "172.29.1.2",
        "gateway_ip": "172.29.1.1",
        "subnet_cidr": "172.29.1.0/30",
        "mac_address": "06:00:ac:1d:01:02",
        "dns_servers": ["1.1.1.1"],
    }
    workspace_path.write_text(json.dumps(payload, indent=2, sort_keys=True), encoding="utf-8")
    workspace = manager._load_workspace_locked(workspace_id)  # noqa: SLF001
    instance = workspace.to_instance(
        workdir=tmp_path / "vms" / "workspaces" / workspace_id / "runtime"
    )

    service = vm_manager_module.WorkspaceServiceRecord(
        workspace_id=workspace_id,
        service_name="web",
        command="sleep 60",
        cwd="/workspace",
        state="running",
        started_at=time.time(),
        execution_mode="guest_vsock",
        published_ports=[
            vm_manager_module.WorkspacePublishedPortRecord(
                guest_port=8080,
                host_port=18080,
                proxy_pid=9999,
            )
        ],
    )
    manager._save_workspace_service_locked(service)  # noqa: SLF001
    stopped: list[int | None] = []
    monkeypatch.setattr(
        vm_manager_module,
        "_stop_workspace_published_port_proxy",
        lambda published_port: stopped.append(published_port.proxy_pid),
    )
    monkeypatch.setattr(
        manager._backend,
        "status_service",
        lambda *args, **kwargs: {
            "service_name": "web",
            "command": "sleep 60",
            "cwd": "/workspace",
            "state": "exited",
            "started_at": service.started_at,
            "ended_at": service.started_at + 1,
            "exit_code": 0,
            "execution_mode": "guest_vsock",
        },
    )

    refreshed = manager._refresh_workspace_service_locked(  # noqa: SLF001
        workspace,
        instance,
        service,
    )
    assert refreshed.state == "exited"
    assert refreshed.published_ports == [
        vm_manager_module.WorkspacePublishedPortRecord(
            guest_port=8080,
            host_port=18080,
            proxy_pid=None,
        )
    ]
    assert stopped == [9999]

    manager._save_workspace_service_locked(service)  # noqa: SLF001
    refreshed_calls: list[str] = []
    monkeypatch.setattr(manager, "_require_workspace_service_support", lambda instance: None)

    def _refresh_services(
        workspace: vm_manager_module.WorkspaceRecord,
        instance: vm_manager_module.VmInstance,
    ) -> list[vm_manager_module.WorkspaceServiceRecord]:
        del instance
        refreshed_calls.append(workspace.workspace_id)
        return []

    monkeypatch.setattr(
        manager,
        "_refresh_workspace_services_locked",
        _refresh_services,
    )
    manager._refresh_workspace_service_counts_locked(workspace)  # noqa: SLF001
    assert refreshed_calls == [workspace_id]


def test_workspace_start_published_ports_cleans_up_partial_failure(
    tmp_path: Path,
    monkeypatch: pytest.MonkeyPatch,
) -> None:
    manager = VmManager(
        backend_name="mock",
        base_dir=tmp_path / "vms",
        network_manager=TapNetworkManager(enabled=False),
    )
    created = manager.create_workspace(
        environment="debian:12",
        vcpu_count=1,
        mem_mib=512,
        ttl_seconds=600,
        allow_host_compat=True,
    )
    workspace = manager._load_workspace_locked(str(created["workspace_id"]))  # noqa: SLF001
    service = vm_manager_module.WorkspaceServiceRecord(
        workspace_id=workspace.workspace_id,
        service_name="web",
        command="sleep 60",
        cwd="/workspace",
        state="running",
        started_at=time.time(),
        execution_mode="guest_vsock",
    )
    started_record = vm_manager_module.WorkspacePublishedPortRecord(
        guest_port=8080,
        host_port=18080,
        proxy_pid=9999,
    )
    calls: list[int] = []

    def _start_proxy(**kwargs: object) -> vm_manager_module.WorkspacePublishedPortRecord:
        spec = cast(vm_manager_module.WorkspacePublishedPortSpec, kwargs["spec"])
        if spec.guest_port == 8080:
            return started_record
        raise RuntimeError("proxy boom")

    monkeypatch.setattr(vm_manager_module, "_start_workspace_published_port_proxy", _start_proxy)
    monkeypatch.setattr(
        vm_manager_module,
        "_stop_workspace_published_port_proxy",
        lambda published_port: calls.append(published_port.proxy_pid or -1),
    )

    with pytest.raises(RuntimeError, match="proxy boom"):
        manager._start_workspace_service_published_ports(  # noqa: SLF001
            workspace=workspace,
            service=service,
            guest_ip="172.29.1.2",
            published_ports=[
                vm_manager_module.WorkspacePublishedPortSpec(guest_port=8080),
                vm_manager_module.WorkspacePublishedPortSpec(guest_port=9090),
            ],
        )

    assert calls == [9999]


def test_workspace_service_start_replaces_non_running_record(tmp_path: Path) -> None:
    manager = VmManager(
        backend_name="mock",
289	tests/test_workspace_ports.py	Normal file

@@ -0,0 +1,289 @@
from __future__ import annotations

import json
import selectors
import signal
import socket
import socketserver
import threading
from pathlib import Path
from types import SimpleNamespace
from typing import Any, cast

import pytest

from pyro_mcp import workspace_ports


class _EchoHandler(socketserver.BaseRequestHandler):
    def handle(self) -> None:
        data = self.request.recv(65536)
        if data:
            self.request.sendall(data)


def test_workspace_port_proxy_handler_rejects_invalid_server() -> None:
    handler = workspace_ports._ProxyHandler.__new__(workspace_ports._ProxyHandler)  # noqa: SLF001
    handler.server = cast(Any, object())
    handler.request = object()

    with pytest.raises(RuntimeError, match="proxy server is invalid"):
        handler.handle()


def test_workspace_port_proxy_handler_ignores_upstream_connect_failure(
    monkeypatch: Any,
) -> None:
    handler = workspace_ports._ProxyHandler.__new__(workspace_ports._ProxyHandler)  # noqa: SLF001
    server = workspace_ports._ProxyServer.__new__(workspace_ports._ProxyServer)  # noqa: SLF001
    server.target_address = ("127.0.0.1", 12345)
    handler.server = server
    handler.request = object()

    def _raise_connect(*args: Any, **kwargs: Any) -> socket.socket:
        del args, kwargs
        raise OSError("boom")

    monkeypatch.setattr(socket, "create_connection", _raise_connect)

    handler.handle()


def test_workspace_port_proxy_forwards_tcp_traffic() -> None:
    upstream = socketserver.ThreadingTCPServer(
        (workspace_ports.DEFAULT_PUBLISHED_PORT_HOST, 0),
        _EchoHandler,
    )
    upstream_thread = threading.Thread(target=upstream.serve_forever, daemon=True)
    upstream_thread.start()
    upstream_host = str(upstream.server_address[0])
    upstream_port = int(upstream.server_address[1])
    proxy = workspace_ports._ProxyServer(  # noqa: SLF001
        (workspace_ports.DEFAULT_PUBLISHED_PORT_HOST, 0),
        (upstream_host, upstream_port),
    )
    proxy_thread = threading.Thread(target=proxy.serve_forever, daemon=True)
    proxy_thread.start()
    try:
        proxy_host = str(proxy.server_address[0])
        proxy_port = int(proxy.server_address[1])
        with socket.create_connection((proxy_host, proxy_port), timeout=5) as client:
            client.sendall(b"hello")
            received = client.recv(65536)
        assert received == b"hello"
    finally:
        proxy.shutdown()
        proxy.server_close()
        upstream.shutdown()
        upstream.server_close()


def test_workspace_ports_main_writes_ready_file(
    tmp_path: Path,
    monkeypatch: Any,
) -> None:
    ready_file = tmp_path / "proxy.ready.json"
    signals: list[int] = []

    class StubProxyServer:
        def __init__(
            self,
            server_address: tuple[str, int],
            target_address: tuple[str, int],
        ) -> None:
            self.server_address = (server_address[0], 18080)
            self.target_address = target_address

        def serve_forever(self, poll_interval: float = 0.2) -> None:
            assert poll_interval == 0.2

        def shutdown(self) -> None:
            return None

        def server_close(self) -> None:
            return None

    monkeypatch.setattr(workspace_ports, "_ProxyServer", StubProxyServer)
    monkeypatch.setattr(
        signal,
        "signal",
        lambda signum, handler: signals.append(signum),
    )

    result = workspace_ports.main(
        [
            "--listen-host",
            "127.0.0.1",
            "--listen-port",
            "0",
            "--target-host",
            "172.29.1.2",
            "--target-port",
            "8080",
            "--ready-file",
            str(ready_file),
        ]
    )

    assert result == 0
    payload = json.loads(ready_file.read_text(encoding="utf-8"))
    assert payload == {
        "host": "127.0.0.1",
        "host_port": 18080,
        "protocol": "tcp",
        "target_host": "172.29.1.2",
        "target_port": 8080,
    }
    assert signals == [signal.SIGTERM, signal.SIGINT]


def test_workspace_ports_main_shutdown_handler_stops_server(
    tmp_path: Path,
    monkeypatch: Any,
) -> None:
    ready_file = tmp_path / "proxy.ready.json"
    shutdown_called: list[bool] = []
    handlers: dict[int, Any] = {}

    class StubProxyServer:
        def __init__(
            self,
            server_address: tuple[str, int],
            target_address: tuple[str, int],
        ) -> None:
            self.server_address = server_address
            self.target_address = target_address

        def serve_forever(self, poll_interval: float = 0.2) -> None:
            handlers[signal.SIGTERM](signal.SIGTERM, None)
            assert poll_interval == 0.2

        def shutdown(self) -> None:
            shutdown_called.append(True)

        def server_close(self) -> None:
            return None

    class ImmediateThread:
        def __init__(self, *, target: Any, daemon: bool) -> None:
            self._target = target
            assert daemon is True

        def start(self) -> None:
            self._target()

    monkeypatch.setattr(workspace_ports, "_ProxyServer", StubProxyServer)
    monkeypatch.setattr(
        signal,
        "signal",
        lambda signum, handler: handlers.__setitem__(signum, handler),
    )
    monkeypatch.setattr(threading, "Thread", ImmediateThread)

    result = workspace_ports.main(
        [
            "--listen-host",
            "127.0.0.1",
            "--listen-port",
            "18080",
            "--target-host",
            "172.29.1.2",
            "--target-port",
            "8080",
            "--ready-file",
            str(ready_file),
        ]
    )

    assert result == 0
    assert shutdown_called == [True]


def test_workspace_port_proxy_handler_handles_empty_and_invalid_selector_events(
    monkeypatch: Any,
) -> None:
    source, source_peer = socket.socketpair()
    upstream, upstream_peer = socket.socketpair()
    source_peer.close()

    class FakeSelector:
        def __init__(self) -> None:
            self._events = iter(
                [
                    [],
                    [(SimpleNamespace(fileobj=object(), data=object()), None)],
                    [(SimpleNamespace(fileobj=source, data=upstream), None)],
                ]
            )

        def register(self, *_args: Any, **_kwargs: Any) -> None:
            return None

        def select(self) -> list[tuple[SimpleNamespace, None]]:
            return next(self._events)

        def close(self) -> None:
            return None

    handler = workspace_ports._ProxyHandler.__new__(workspace_ports._ProxyHandler)  # noqa: SLF001
    server = workspace_ports._ProxyServer.__new__(workspace_ports._ProxyServer)  # noqa: SLF001
    server.target_address = ("127.0.0.1", 12345)
    handler.server = server
    handler.request = source

    monkeypatch.setattr(socket, "create_connection", lambda *args, **kwargs: upstream)
    monkeypatch.setattr(selectors, "DefaultSelector", FakeSelector)

    try:
        handler.handle()
    finally:
        source.close()
        upstream.close()
        upstream_peer.close()


def test_workspace_port_proxy_handler_handles_recv_and_send_errors(
    monkeypatch: Any,
) -> None:
    def _run_once(*, close_source: bool) -> None:
        source, source_peer = socket.socketpair()
        upstream, upstream_peer = socket.socketpair()
        if not close_source:
            source_peer.sendall(b"hello")

        class FakeSelector:
            def register(self, *_args: Any, **_kwargs: Any) -> None:
                return None

            def select(self) -> list[tuple[SimpleNamespace, None]]:
                if close_source:
                    source.close()
                else:
                    upstream.close()
                return [(SimpleNamespace(fileobj=source, data=upstream), None)]

            def close(self) -> None:
                return None

        handler = workspace_ports._ProxyHandler.__new__(workspace_ports._ProxyHandler)  # noqa: SLF001
        server = workspace_ports._ProxyServer.__new__(workspace_ports._ProxyServer)  # noqa: SLF001
        server.target_address = ("127.0.0.1", 12345)
        handler.server = server
        handler.request = source

        monkeypatch.setattr(socket, "create_connection", lambda *args, **kwargs: upstream)
        monkeypatch.setattr(selectors, "DefaultSelector", FakeSelector)

        try:
            handler.handle()
        finally:
            source_peer.close()
            if close_source:
                upstream.close()
                upstream_peer.close()
            else:
                source.close()
                upstream_peer.close()

    _run_once(close_source=True)
    _run_once(close_source=False)
2	uv.lock	generated

@@ -706,7 +706,7 @@ crypto = [
 
 [[package]]
 name = "pyro-mcp"
-version = "2.9.0"
+version = "2.10.0"
 source = { editable = "." }
 dependencies = [
     { name = "mcp" },