Refactor public API around environments

This commit is contained in:
Thales Maciel 2026-03-08 16:02:02 -03:00
parent 57dae52cc2
commit 5d5243df23
41 changed files with 1301 additions and 459 deletions

LICENSE

@ -0,0 +1,21 @@
MIT License

Copyright (c) 2026 Thales Maciel

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


@ -64,6 +64,7 @@ check: lint typecheck test
dist-check:
.venv/bin/pyro --version
.venv/bin/pyro --help >/dev/null
.venv/bin/pyro env list >/dev/null
demo:
uv run pyro demo

README.md

@ -1,21 +1,20 @@
# pyro-mcp
`pyro-mcp` is a Firecracker-backed sandbox for coding agents.
`pyro-mcp` runs commands inside ephemeral Firecracker microVMs using curated Linux environments such as `debian:12`.
It exposes the same runtime in two public forms:
It exposes the same runtime in three public forms:
- a `pyro` CLI
- a Python SDK via `from pyro_mcp import Pyro`
It also ships an MCP server so LLM clients can use the same VM runtime through tools.
- the `pyro` CLI
- the Python SDK via `from pyro_mcp import Pyro`
- an MCP server so LLM clients can call VM tools directly
## Start Here
- Install: [docs/install.md](/home/thales/projects/personal/pyro/docs/install.md)
- Host requirements: [docs/host-requirements.md](/home/thales/projects/personal/pyro/docs/host-requirements.md)
- Integration targets: [docs/integrations.md](/home/thales/projects/personal/pyro/docs/integrations.md)
- Public contract: [docs/public-contract.md](/home/thales/projects/personal/pyro/docs/public-contract.md)
- Troubleshooting: [docs/troubleshooting.md](/home/thales/projects/personal/pyro/docs/troubleshooting.md)
- Install: [docs/install.md](docs/install.md)
- Host requirements: [docs/host-requirements.md](docs/host-requirements.md)
- Integration targets: [docs/integrations.md](docs/integrations.md)
- Public contract: [docs/public-contract.md](docs/public-contract.md)
- Troubleshooting: [docs/troubleshooting.md](docs/troubleshooting.md)
## Public UX
@ -34,80 +33,41 @@ pyro mcp serve
The public user-facing interface is `pyro` and `Pyro`.
`Makefile` targets are contributor conveniences for this repository and are not the primary product UX.
Check the installed CLI version:
## Official Environments
```bash
pyro --version
```
Current curated environments in this repository:
## Repository Storage
- `debian:12`
- `debian:12-base`
- `debian:12-build`
This repository uses Git LFS for the packaged runtime images under
`src/pyro_mcp/runtime_bundle/`.
Fresh contributor setup:
```bash
git lfs install
git clone <repo>
cd pyro
git lfs pull
make setup
```
The large files tracked through LFS are:
- `src/pyro_mcp/runtime_bundle/**/rootfs.ext4`
- `src/pyro_mcp/runtime_bundle/**/vmlinux`
If you are working from an older clone created before the LFS migration, reclone or realign your branch to the rewritten history before doing more work.
## Capabilities
- Firecracker microVM execution with bundled runtime artifacts
- standard profiles:
- `debian-base`
- `debian-git`
- `debian-build`
- high-level one-shot execution via `vm_run` / `Pyro.run_in_vm(...)`
- low-level lifecycle control when needed:
- `vm_create`
- `vm_start`
- `vm_exec`
- `vm_stop`
- `vm_delete`
- `vm_status`
- `vm_network_info`
- `vm_reap_expired`
- outbound guest networking with explicit opt-in
## Requirements
- Linux host
- `/dev/kvm`
- Python 3.12+
- host privilege for TAP/NAT setup when using guest networking
The current implementation uses `sudo -n` for `ip`, `nft`, and `iptables` when networked runs are requested.
The package ships the embedded Firecracker runtime and a package-controlled environment catalog.
Environment artifacts are installed into a local cache on first use or through `pyro env pull`.
## CLI
Start the MCP server:
List available environments:
```bash
pyro mcp serve
pyro env list
```
Prefetch one environment:
```bash
pyro env pull debian:12
```
Run one command in an ephemeral VM:
```bash
pyro run --profile debian-git --vcpu-count 1 --mem-mib 1024 -- git --version
pyro run debian:12 --vcpu-count 1 --mem-mib 1024 -- git --version
```
Run with outbound internet enabled:
```bash
pyro run --profile debian-git --vcpu-count 1 --mem-mib 1024 --network -- \
pyro run debian:12 --vcpu-count 1 --mem-mib 1024 --network -- \
"git clone --depth 1 https://github.com/octocat/Hello-World.git hello-world && git -C hello-world rev-parse --is-inside-work-tree"
```
@ -132,23 +92,6 @@ ollama pull llama3.2:3b
pyro demo ollama
```
Verbose Ollama logs:
```bash
pyro demo ollama -v
```
## Integration Examples
- Python one-shot SDK example: [examples/python_run.py](/home/thales/projects/personal/pyro/examples/python_run.py)
- Python lifecycle example: [examples/python_lifecycle.py](/home/thales/projects/personal/pyro/examples/python_lifecycle.py)
- MCP client config example: [examples/mcp_client_config.md](/home/thales/projects/personal/pyro/examples/mcp_client_config.md)
- Claude Desktop MCP config: [examples/claude_desktop_mcp_config.json](/home/thales/projects/personal/pyro/examples/claude_desktop_mcp_config.json)
- Cursor MCP config: [examples/cursor_mcp_config.json](/home/thales/projects/personal/pyro/examples/cursor_mcp_config.json)
- OpenAI Responses API example: [examples/openai_responses_vm_run.py](/home/thales/projects/personal/pyro/examples/openai_responses_vm_run.py)
- LangChain wrapper example: [examples/langchain_vm_run.py](/home/thales/projects/personal/pyro/examples/langchain_vm_run.py)
- Agent-ready `vm_run` example: [examples/agent_vm_run.py](/home/thales/projects/personal/pyro/examples/agent_vm_run.py)
## Python SDK
```python
@ -156,7 +99,7 @@ from pyro_mcp import Pyro
pyro = Pyro()
result = pyro.run_in_vm(
profile="debian-git",
environment="debian:12",
command="git --version",
vcpu_count=1,
mem_mib=1024,
@ -173,7 +116,7 @@ from pyro_mcp import Pyro
pyro = Pyro()
created = pyro.create_vm(
profile="debian-git",
environment="debian:12",
vcpu_count=1,
mem_mib=1024,
ttl_seconds=600,
@ -185,19 +128,26 @@ result = pyro.exec_vm(vm_id, command="git --version", timeout_seconds=30)
print(result["stdout"])
```
The recommended agent-facing default is still one-shot execution through `run_in_vm(...)` / `vm_run`.
Use lifecycle methods only when the agent needs VM state to persist across multiple calls.
Environment management is also available through the SDK:
```python
from pyro_mcp import Pyro
pyro = Pyro()
print(pyro.list_environments())
print(pyro.inspect_environment("debian:12"))
```
## MCP Tools
Primary agent-facing tool:
- `vm_run(profile, command, vcpu_count, mem_mib, timeout_seconds=30, ttl_seconds=600, network=false)`
- `vm_run(environment, command, vcpu_count, mem_mib, timeout_seconds=30, ttl_seconds=600, network=false)`
Advanced lifecycle tools:
- `vm_list_profiles()`
- `vm_create(profile, vcpu_count, mem_mib, ttl_seconds=600, network=false)`
- `vm_list_environments()`
- `vm_create(environment, vcpu_count, mem_mib, ttl_seconds=600, network=false)`
- `vm_start(vm_id)`
- `vm_exec(vm_id, command, timeout_seconds=30)`
- `vm_stop(vm_id)`
@ -206,31 +156,28 @@ Advanced lifecycle tools:
- `vm_network_info(vm_id)`
- `vm_reap_expired()`
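For illustration, the `vm_run` argument contract above can be expressed as a small normalization step on the client side. This sketch is not pyro's implementation; it only encodes the required fields and documented defaults, and assumes dispatch to the MCP server happens elsewhere:

```python
# Sketch of the vm_run argument contract: required fields plus the
# documented defaults (timeout_seconds=30, ttl_seconds=600, network=False).
from typing import Any

VM_RUN_REQUIRED = ("environment", "command", "vcpu_count", "mem_mib")
VM_RUN_DEFAULTS = {"timeout_seconds": 30, "ttl_seconds": 600, "network": False}


def normalize_vm_run_args(arguments: dict[str, Any]) -> dict[str, Any]:
    # Reject calls that omit any required field.
    missing = [key for key in VM_RUN_REQUIRED if key not in arguments]
    if missing:
        raise ValueError(f"vm_run call is missing required fields: {missing}")
    # Fill defaults without overriding caller-supplied values.
    return {**VM_RUN_DEFAULTS, **arguments}
```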
## Integration Examples
- Python one-shot SDK example: [examples/python_run.py](examples/python_run.py)
- Python lifecycle example: [examples/python_lifecycle.py](examples/python_lifecycle.py)
- MCP client config example: [examples/mcp_client_config.md](examples/mcp_client_config.md)
- Claude Desktop MCP config: [examples/claude_desktop_mcp_config.json](examples/claude_desktop_mcp_config.json)
- Cursor MCP config: [examples/cursor_mcp_config.json](examples/cursor_mcp_config.json)
- OpenAI Responses API example: [examples/openai_responses_vm_run.py](examples/openai_responses_vm_run.py)
- LangChain wrapper example: [examples/langchain_vm_run.py](examples/langchain_vm_run.py)
- Agent-ready `vm_run` example: [examples/agent_vm_run.py](examples/agent_vm_run.py)
## Runtime
The package ships a bundled Linux x86_64 runtime payload with:
The package ships an embedded Linux x86_64 runtime payload with:
- Firecracker
- Jailer
- guest kernel
- guest agent
- profile rootfs images
- runtime manifest and diagnostics
No system Firecracker installation is required.
Runtime diagnostics:
```bash
pyro doctor
```
The doctor report includes:
- runtime integrity
- component versions
- capability flags
- KVM availability
- host networking prerequisites
`pyro` installs curated environments into a local cache and reports their status through `pyro env inspect` and `pyro doctor`.
## Contributor Workflow
@ -243,18 +190,4 @@ make check
make dist-check
```
Runtime build and validation helpers remain available through `make`, including:
- `make runtime-bundle`
- `make runtime-materialize`
- `make runtime-boot-check`
- `make runtime-network-check`
Space cleanup after runtime work:
```bash
rm -rf build
git lfs prune
```
Deleting and recreating `.venv/` is also a straightforward way to reclaim local disk if needed.
Contributor runtime source artifacts are still maintained under `src/pyro_mcp/runtime_bundle/` and `runtime_sources/`.


@ -2,9 +2,8 @@
## Requirements
- Linux host
- Linux x86_64 host
- Python 3.12+
- Git LFS
- `/dev/kvm`
If you want outbound guest networking:
@ -21,10 +20,16 @@ Run the MCP server directly from the package without a manual install:
uvx --from pyro-mcp pyro mcp serve
```
Run one command in a sandbox:
Run one command in a curated environment:
```bash
uvx --from pyro-mcp pyro run --profile debian-git --vcpu-count 1 --mem-mib 1024 -- git --version
uvx --from pyro-mcp pyro run debian:12 --vcpu-count 1 --mem-mib 1024 -- git --version
```
Inspect the official environment catalog:
```bash
uvx --from pyro-mcp pyro env list
```
## Installed CLI
@ -32,6 +37,7 @@ uvx --from pyro-mcp pyro run --profile debian-git --vcpu-count 1 --mem-mib 1024
```bash
uv tool install .
pyro --version
pyro env list
pyro doctor
```


@ -29,7 +29,7 @@ Recommended surface:
Canonical example:
- [examples/openai_responses_vm_run.py](/home/thales/projects/personal/pyro/examples/openai_responses_vm_run.py)
- [examples/openai_responses_vm_run.py](../examples/openai_responses_vm_run.py)
## MCP Clients
@ -45,9 +45,9 @@ Recommended entrypoint:
Starter config:
- [examples/mcp_client_config.md](/home/thales/projects/personal/pyro/examples/mcp_client_config.md)
- [examples/claude_desktop_mcp_config.json](/home/thales/projects/personal/pyro/examples/claude_desktop_mcp_config.json)
- [examples/cursor_mcp_config.json](/home/thales/projects/personal/pyro/examples/cursor_mcp_config.json)
- [examples/mcp_client_config.md](../examples/mcp_client_config.md)
- [examples/claude_desktop_mcp_config.json](../examples/claude_desktop_mcp_config.json)
- [examples/cursor_mcp_config.json](../examples/cursor_mcp_config.json)
## Direct Python SDK
@ -63,8 +63,8 @@ Recommended default:
Examples:
- [examples/python_run.py](/home/thales/projects/personal/pyro/examples/python_run.py)
- [examples/python_lifecycle.py](/home/thales/projects/personal/pyro/examples/python_lifecycle.py)
- [examples/python_run.py](../examples/python_run.py)
- [examples/python_lifecycle.py](../examples/python_lifecycle.py)
## Agent Framework Wrappers
@ -87,7 +87,7 @@ Recommended pattern:
Concrete example:
- [examples/langchain_vm_run.py](/home/thales/projects/personal/pyro/examples/langchain_vm_run.py)
- [examples/langchain_vm_run.py](../examples/langchain_vm_run.py)
## Selection Rule


@ -1,6 +1,6 @@
# Public Contract
This document defines the supported public interface for `pyro-mcp`.
This document defines the supported public interface for `pyro-mcp` `1.x`.
## Package Identity
@ -12,15 +12,19 @@ This document defines the supported public interface for `pyro-mcp`.
Top-level commands:
- `pyro env list`
- `pyro env pull`
- `pyro env inspect`
- `pyro env prune`
- `pyro mcp serve`
- `pyro run`
- `pyro doctor`
- `pyro demo`
- `pyro demo ollama`
Stable `pyro run` flags:
Stable `pyro run` interface:
- `--profile`
- positional environment name
- `--vcpu-count`
- `--mem-mib`
- `--timeout-seconds`
@ -29,7 +33,8 @@ Stable `pyro run` flags:
Behavioral guarantees:
- `pyro run -- <command>` returns structured JSON.
- `pyro run <environment> -- <command>` returns structured JSON.
- `pyro env list`, `pyro env pull`, `pyro env inspect`, and `pyro env prune` return structured JSON.
- `pyro doctor` returns structured JSON diagnostics.
- `pyro demo ollama` prints log lines plus a final summary line.
@ -42,7 +47,10 @@ Primary facade:
Supported public methods:
- `create_server()`
- `list_profiles()`
- `list_environments()`
- `pull_environment(environment)`
- `inspect_environment(environment)`
- `prune_environments()`
- `create_vm(...)`
- `start_vm(vm_id)`
- `exec_vm(vm_id, *, command, timeout_seconds=30)`
@ -61,7 +69,7 @@ Primary tool:
Advanced lifecycle tools:
- `vm_list_profiles`
- `vm_list_environments`
- `vm_create`
- `vm_start`
- `vm_exec`
@ -71,6 +79,8 @@ Advanced lifecycle tools:
- `vm_network_info`
- `vm_reap_expired`
## Compatibility Rule
## Versioning Rule
Changes to any command name, public flag, public method name, or MCP tool name are breaking changes and should be treated as a deliberate contract version change.
- `pyro-mcp` uses SemVer.
- Environment names are stable identifiers in the shipped catalog.
- Changing a public command name, public flag, public method name, public MCP tool name, or required request field is a breaking change.
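A minimal sketch of what that SemVer rule means for a consumer: with the contract pinned to `1.x`, any major-version change should be treated as potentially breaking. The helper below is an assumption about how a client might check this, not pyro tooling:

```python
# Hypothetical compatibility check for a client pinned to the 1.x contract.
# Under SemVer, only the major version signals breaking contract changes.
def is_compatible(installed_version: str, expected_major: int = 1) -> bool:
    major = int(installed_version.split(".", 1)[0])
    return major == expected_major
```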


@ -1,21 +1,25 @@
# Troubleshooting
## `pyro doctor` reports runtime checksum mismatch
## `pyro env pull` or first-run install fails
Cause:
- the Git LFS pointer files are present, but the real runtime images have not been checked out
- the environment cache directory is not writable
- the configured environment source is unavailable
- the environment download was interrupted
Fix:
```bash
git lfs pull
git lfs checkout
pyro doctor
pyro env inspect debian:12
pyro env prune
pyro env pull debian:12
```
## `pyro run --network` fails before the guest starts
Cause:
- the host cannot create TAP devices or NAT rules
Fix:
@ -31,9 +35,22 @@ Then verify:
- `/dev/net/tun`
- host privilege for `sudo -n`
## `pyro doctor` reports runtime issues
Cause:
- the embedded Firecracker runtime files are missing or corrupted
Fix:
- reinstall the package
- verify `pyro doctor` reports `runtime_ok: true`
- if you are working from a source checkout, ensure large runtime artifacts are present with `git lfs pull`
## Ollama demo exits with tool-call failures
Cause:
- the model produced an invalid tool call or your Ollama model is not reliable enough for tool use
Fix:
@ -47,18 +64,3 @@ Inspect:
- model output
- requested tool calls
- tool results
## Repository clone is still huge after the LFS migration
Cause:
- old refs are still present locally
- `build/` or `.venv/` duplicates are consuming disk
Fix:
```bash
rm -rf build
git lfs prune
```
If needed, recreate `.venv/`.


@ -13,7 +13,7 @@ VM_RUN_TOOL: dict[str, Any] = {
"input_schema": {
"type": "object",
"properties": {
"profile": {"type": "string"},
"environment": {"type": "string"},
"command": {"type": "string"},
"vcpu_count": {"type": "integer"},
"mem_mib": {"type": "integer"},
@ -21,7 +21,7 @@ VM_RUN_TOOL: dict[str, Any] = {
"ttl_seconds": {"type": "integer", "default": 600},
"network": {"type": "boolean", "default": False},
},
"required": ["profile", "command", "vcpu_count", "mem_mib"],
"required": ["environment", "command", "vcpu_count", "mem_mib"],
},
}
@ -29,7 +29,7 @@ VM_RUN_TOOL: dict[str, Any] = {
def call_vm_run(arguments: dict[str, Any]) -> dict[str, Any]:
pyro = Pyro()
return pyro.run_in_vm(
profile=str(arguments["profile"]),
environment=str(arguments["environment"]),
command=str(arguments["command"]),
vcpu_count=int(arguments["vcpu_count"]),
mem_mib=int(arguments["mem_mib"]),
@ -41,7 +41,7 @@ def call_vm_run(arguments: dict[str, Any]) -> dict[str, Any]:
def main() -> None:
tool_arguments: dict[str, Any] = {
"profile": "debian-git",
"environment": "debian:12",
"command": "git --version",
"vcpu_count": 1,
"mem_mib": 1024,


@ -19,7 +19,7 @@ F = TypeVar("F", bound=Callable[..., Any])
def run_vm_run_tool(
*,
profile: str,
environment: str,
command: str,
vcpu_count: int,
mem_mib: int,
@ -29,7 +29,7 @@ def run_vm_run_tool(
) -> str:
pyro = Pyro()
result = pyro.run_in_vm(
profile=profile,
environment=environment,
command=command,
vcpu_count=vcpu_count,
mem_mib=mem_mib,
@ -53,17 +53,17 @@ def build_langchain_vm_run_tool() -> Any:
@decorator
def vm_run(
profile: str,
environment: str,
command: str,
vcpu_count: int,
mem_mib: int,
timeout_seconds: int = 30,
ttl_seconds: int = 600,
network: bool = False,
) -> str:
) -> str:
"""Run one command in an ephemeral Firecracker VM and clean it up."""
return run_vm_run_tool(
profile=profile,
environment=environment,
command=command,
vcpu_count=vcpu_count,
mem_mib=mem_mib,


@ -36,5 +36,5 @@ Use lifecycle tools only when the agent needs persistent VM state across multipl
Concrete client-specific examples:
- Claude Desktop: [examples/claude_desktop_mcp_config.json](/home/thales/projects/personal/pyro/examples/claude_desktop_mcp_config.json)
- Cursor: [examples/cursor_mcp_config.json](/home/thales/projects/personal/pyro/examples/cursor_mcp_config.json)
- Claude Desktop: [examples/claude_desktop_mcp_config.json](claude_desktop_mcp_config.json)
- Cursor: [examples/cursor_mcp_config.json](cursor_mcp_config.json)


@ -26,7 +26,7 @@ OPENAI_VM_RUN_TOOL: dict[str, Any] = {
"parameters": {
"type": "object",
"properties": {
"profile": {"type": "string"},
"environment": {"type": "string"},
"command": {"type": "string"},
"vcpu_count": {"type": "integer"},
"mem_mib": {"type": "integer"},
@ -34,7 +34,7 @@ OPENAI_VM_RUN_TOOL: dict[str, Any] = {
"ttl_seconds": {"type": "integer"},
"network": {"type": "boolean"},
},
"required": ["profile", "command", "vcpu_count", "mem_mib"],
"required": ["environment", "command", "vcpu_count", "mem_mib"],
"additionalProperties": False,
},
}
@ -43,7 +43,7 @@ OPENAI_VM_RUN_TOOL: dict[str, Any] = {
def call_vm_run(arguments: dict[str, Any]) -> dict[str, Any]:
pyro = Pyro()
return pyro.run_in_vm(
profile=str(arguments["profile"]),
environment=str(arguments["environment"]),
command=str(arguments["command"]),
vcpu_count=int(arguments["vcpu_count"]),
mem_mib=int(arguments["mem_mib"]),
@ -88,7 +88,7 @@ def main() -> None:
model = os.environ.get("OPENAI_MODEL", DEFAULT_MODEL)
prompt = (
"Use the vm_run tool to run `git --version` in an ephemeral VM. "
"Use the debian-git profile with 1 vCPU and 1024 MiB of memory. "
"Use the `debian:12` environment with 1 vCPU and 1024 MiB of memory. "
"Do not use networking for this request."
)
print(run_openai_vm_run_example(prompt=prompt, model=model))


@ -10,7 +10,7 @@ from pyro_mcp import Pyro
def main() -> None:
pyro = Pyro()
created = pyro.create_vm(
profile="debian-git",
environment="debian:12",
vcpu_count=1,
mem_mib=1024,
ttl_seconds=600,


@ -10,7 +10,7 @@ from pyro_mcp import Pyro
def main() -> None:
pyro = Pyro()
result = pyro.run_in_vm(
profile="debian-git",
environment="debian:12",
command="git --version",
vcpu_count=1,
mem_mib=1024,


@ -1,16 +1,33 @@
[project]
name = "pyro-mcp"
version = "0.1.0"
description = "MCP tools for ephemeral VM lifecycle management."
version = "1.0.0"
description = "Curated Linux environments for ephemeral Firecracker-backed VM execution."
readme = "README.md"
license = { file = "LICENSE" }
authors = [
{ name = "Thales Maciel", email = "thales@thalesmaciel.com" }
]
requires-python = ">=3.12"
classifiers = [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Testing",
"Topic :: System :: Systems Administration",
]
dependencies = [
"mcp>=1.26.0",
]
[project.urls]
Homepage = "https://git.thaloco.com/thaloco/pyro-mcp"
Repository = "https://git.thaloco.com/thaloco/pyro-mcp"
Issues = "https://git.thaloco.com/thaloco/pyro-mcp/issues"
[project.scripts]
pyro = "pyro_mcp.cli:main"
@ -22,15 +39,23 @@ build-backend = "hatchling.build"
packages = ["src/pyro_mcp"]
[tool.hatch.build.targets.wheel.force-include]
"src/pyro_mcp/runtime_bundle" = "pyro_mcp/runtime_bundle"
"src/pyro_mcp/runtime_bundle/NOTICE" = "pyro_mcp/runtime_bundle/NOTICE"
"src/pyro_mcp/runtime_bundle/linux-x86_64/bin/firecracker" = "pyro_mcp/runtime_bundle/linux-x86_64/bin/firecracker"
"src/pyro_mcp/runtime_bundle/linux-x86_64/bin/jailer" = "pyro_mcp/runtime_bundle/linux-x86_64/bin/jailer"
"src/pyro_mcp/runtime_bundle/linux-x86_64/guest/pyro_guest_agent.py" = "pyro_mcp/runtime_bundle/linux-x86_64/guest/pyro_guest_agent.py"
"src/pyro_mcp/runtime_bundle/linux-x86_64/manifest.json" = "pyro_mcp/runtime_bundle/linux-x86_64/manifest.json"
[tool.hatch.build.targets.sdist]
include = [
"docs/**",
"src/pyro_mcp/runtime_bundle/**",
"runtime_sources/**",
"src/pyro_mcp/**/*.py",
"src/pyro_mcp/runtime_bundle/NOTICE",
"src/pyro_mcp/runtime_bundle/linux-x86_64/bin/firecracker",
"src/pyro_mcp/runtime_bundle/linux-x86_64/bin/jailer",
"src/pyro_mcp/runtime_bundle/linux-x86_64/guest/pyro_guest_agent.py",
"src/pyro_mcp/runtime_bundle/linux-x86_64/manifest.json",
"README.md",
"LICENSE",
"AGENTS.md",
"pyproject.toml",
]


@ -1,5 +1,5 @@
{
"bundle_version": "0.1.0",
"bundle_version": "1.0.0",
"platform": "linux-x86_64",
"component_versions": {
"firecracker": "1.12.1",


@ -1,11 +1,36 @@
"""Public package surface for pyro_mcp."""
from importlib.metadata import version
from __future__ import annotations
import tomllib
from importlib.metadata import PackageNotFoundError, version
from pathlib import Path
from pyro_mcp.api import Pyro
from pyro_mcp.server import create_server
from pyro_mcp.vm_manager import VmManager
__version__ = version("pyro-mcp")
def _resolve_version() -> str:
    try:
        installed_version = version("pyro-mcp")
    except PackageNotFoundError:
        installed_version = None
    pyproject_path = Path(__file__).resolve().parents[2] / "pyproject.toml"
    if pyproject_path.exists():
        payload = tomllib.loads(pyproject_path.read_text(encoding="utf-8"))
        project = payload.get("project")
        if isinstance(project, dict):
            raw_version = project.get("version")
            if isinstance(raw_version, str) and raw_version != "":
                return raw_version
    if installed_version is None:
        return "0+unknown"
    return installed_version
__version__ = _resolve_version()
__all__ = ["Pyro", "VmManager", "__version__", "create_server"]


@ -19,13 +19,13 @@ class Pyro:
*,
backend_name: str | None = None,
base_dir: Path | None = None,
artifacts_dir: Path | None = None,
cache_dir: Path | None = None,
max_active_vms: int = 4,
) -> None:
self._manager = manager or VmManager(
backend_name=backend_name,
base_dir=base_dir,
artifacts_dir=artifacts_dir,
cache_dir=cache_dir,
max_active_vms=max_active_vms,
)
@ -33,20 +33,29 @@ class Pyro:
def manager(self) -> VmManager:
return self._manager
def list_profiles(self) -> list[dict[str, object]]:
return self._manager.list_profiles()
def list_environments(self) -> list[dict[str, object]]:
return self._manager.list_environments()
def pull_environment(self, environment: str) -> dict[str, object]:
return self._manager.pull_environment(environment)
def inspect_environment(self, environment: str) -> dict[str, object]:
return self._manager.inspect_environment(environment)
def prune_environments(self) -> dict[str, object]:
return self._manager.prune_environments()
def create_vm(
self,
*,
profile: str,
environment: str,
vcpu_count: int,
mem_mib: int,
ttl_seconds: int = 600,
network: bool = False,
) -> dict[str, Any]:
return self._manager.create_vm(
profile=profile,
environment=environment,
vcpu_count=vcpu_count,
mem_mib=mem_mib,
ttl_seconds=ttl_seconds,
@ -77,7 +86,7 @@ class Pyro:
def run_in_vm(
self,
*,
profile: str,
environment: str,
command: str,
vcpu_count: int,
mem_mib: int,
@ -86,7 +95,7 @@ class Pyro:
network: bool = False,
) -> dict[str, Any]:
return self._manager.run_vm(
profile=profile,
environment=environment,
command=command,
vcpu_count=vcpu_count,
mem_mib=mem_mib,
@ -100,7 +109,7 @@ class Pyro:
@server.tool()
async def vm_run(
profile: str,
environment: str,
command: str,
vcpu_count: int,
mem_mib: int,
@ -110,7 +119,7 @@ class Pyro:
) -> dict[str, Any]:
"""Create, start, execute, and clean up an ephemeral VM."""
return self.run_in_vm(
profile=profile,
environment=environment,
command=command,
vcpu_count=vcpu_count,
mem_mib=mem_mib,
@ -120,21 +129,21 @@ class Pyro:
)
@server.tool()
async def vm_list_profiles() -> list[dict[str, object]]:
"""List standard environment profiles and package highlights."""
return self.list_profiles()
async def vm_list_environments() -> list[dict[str, object]]:
"""List curated Linux environments and installation status."""
return self.list_environments()
@server.tool()
async def vm_create(
profile: str,
environment: str,
vcpu_count: int,
mem_mib: int,
ttl_seconds: int = 600,
network: bool = False,
) -> dict[str, Any]:
"""Create an ephemeral VM record with profile and resource sizing."""
"""Create an ephemeral VM record with environment and resource sizing."""
return self.create_vm(
profile=profile,
environment=environment,
vcpu_count=vcpu_count,
mem_mib=mem_mib,
ttl_seconds=ttl_seconds,


@ -11,6 +11,7 @@ from pyro_mcp.api import Pyro
from pyro_mcp.demo import run_demo
from pyro_mcp.ollama_demo import DEFAULT_OLLAMA_BASE_URL, DEFAULT_OLLAMA_MODEL, run_ollama_tool_demo
from pyro_mcp.runtime import DEFAULT_PLATFORM, doctor_report
from pyro_mcp.vm_environments import DEFAULT_CATALOG_VERSION
def _print_json(payload: dict[str, Any]) -> None:
@ -18,22 +19,36 @@ def _print_json(payload: dict[str, Any]) -> None:
def _build_parser() -> argparse.ArgumentParser:
parser = argparse.ArgumentParser(description="pyro CLI for ephemeral Firecracker VMs.")
parser = argparse.ArgumentParser(
description="pyro CLI for curated ephemeral Linux environments."
)
parser.add_argument("--version", action="version", version=f"%(prog)s {__version__}")
subparsers = parser.add_subparsers(dest="command", required=True)
env_parser = subparsers.add_parser("env", help="Inspect and manage curated environments.")
env_subparsers = env_parser.add_subparsers(dest="env_command", required=True)
env_subparsers.add_parser("list", help="List official environments.")
pull_parser = env_subparsers.add_parser(
"pull",
help="Install an environment into the local cache.",
)
pull_parser.add_argument("environment")
inspect_parser = env_subparsers.add_parser("inspect", help="Inspect one environment.")
inspect_parser.add_argument("environment")
env_subparsers.add_parser("prune", help="Delete stale cached environments.")
mcp_parser = subparsers.add_parser("mcp", help="Run the MCP server.")
mcp_subparsers = mcp_parser.add_subparsers(dest="mcp_command", required=True)
mcp_subparsers.add_parser("serve", help="Run the MCP server over stdio.")
run_parser = subparsers.add_parser("run", help="Run one command inside an ephemeral VM.")
run_parser.add_argument("--profile", required=True)
run_parser.add_argument("environment")
run_parser.add_argument("--vcpu-count", type=int, required=True)
run_parser.add_argument("--mem-mib", type=int, required=True)
run_parser.add_argument("--timeout-seconds", type=int, default=30)
run_parser.add_argument("--ttl-seconds", type=int, default=600)
run_parser.add_argument("--network", action="store_true")
run_parser.add_argument("command_args", nargs=argparse.REMAINDER)
run_parser.add_argument("command_args", nargs="*")
doctor_parser = subparsers.add_parser("doctor", help="Inspect runtime and host diagnostics.")
doctor_parser.add_argument("--platform", default=DEFAULT_PLATFORM)
@ -59,13 +74,32 @@ def _require_command(command_args: list[str]) -> str:
def main() -> None:
args = _build_parser().parse_args()
pyro = Pyro()
if args.command == "env":
if args.env_command == "list":
_print_json(
{
"catalog_version": DEFAULT_CATALOG_VERSION,
"environments": pyro.list_environments(),
}
)
return
if args.env_command == "pull":
_print_json(dict(pyro.pull_environment(args.environment)))
return
if args.env_command == "inspect":
_print_json(dict(pyro.inspect_environment(args.environment)))
return
if args.env_command == "prune":
_print_json(dict(pyro.prune_environments()))
return
if args.command == "mcp":
Pyro().create_server().run(transport="stdio")
pyro.create_server().run(transport="stdio")
return
if args.command == "run":
command = _require_command(args.command_args)
result = Pyro().run_in_vm(
profile=args.profile,
result = pyro.run_in_vm(
environment=args.environment,
command=command,
vcpu_count=args.vcpu_count,
mem_mib=args.mem_mib,


@ -2,10 +2,10 @@
from __future__ import annotations
PUBLIC_CLI_COMMANDS = ("mcp", "run", "doctor", "demo")
PUBLIC_CLI_COMMANDS = ("demo", "doctor", "env", "mcp", "run")
PUBLIC_CLI_DEMO_SUBCOMMANDS = ("ollama",)
PUBLIC_CLI_ENV_SUBCOMMANDS = ("inspect", "list", "pull", "prune")
PUBLIC_CLI_RUN_FLAGS = (
"--profile",
"--vcpu-count",
"--mem-mib",
"--timeout-seconds",
@ -18,8 +18,11 @@ PUBLIC_SDK_METHODS = (
"create_vm",
"delete_vm",
"exec_vm",
"list_profiles",
"inspect_environment",
"list_environments",
"network_info_vm",
"prune_environments",
"pull_environment",
"reap_expired",
"run_in_vm",
"start_vm",
@ -28,14 +31,14 @@ PUBLIC_SDK_METHODS = (
)
PUBLIC_MCP_TOOLS = (
"vm_run",
"vm_list_profiles",
"vm_create",
"vm_start",
"vm_exec",
"vm_stop",
"vm_delete",
"vm_status",
"vm_exec",
"vm_list_environments",
"vm_network_info",
"vm_reap_expired",
"vm_run",
"vm_start",
"vm_status",
"vm_stop",
)


@ -28,7 +28,7 @@ def run_demo(*, network: bool = False) -> dict[str, Any]:
"execution_mode": "guest_vsock" if network else "host_compat",
}
return pyro.run_in_vm(
profile="debian-git",
environment="debian:12",
command=_demo_command(status),
vcpu_count=1,
mem_mib=512,


@ -32,7 +32,7 @@ TOOL_SPECS: Final[list[dict[str, Any]]] = [
"parameters": {
"type": "object",
"properties": {
"profile": {"type": "string"},
"environment": {"type": "string"},
"command": {"type": "string"},
"vcpu_count": {"type": "integer"},
"mem_mib": {"type": "integer"},
@ -40,7 +40,7 @@ TOOL_SPECS: Final[list[dict[str, Any]]] = [
"ttl_seconds": {"type": "integer"},
"network": {"type": "boolean"},
},
"required": ["profile", "command", "vcpu_count", "mem_mib"],
"required": ["environment", "command", "vcpu_count", "mem_mib"],
"additionalProperties": False,
},
},
@ -48,8 +48,8 @@ TOOL_SPECS: Final[list[dict[str, Any]]] = [
{
"type": "function",
"function": {
"name": "vm_list_profiles",
"description": "List standard VM environment profiles.",
"name": "vm_list_environments",
"description": "List curated Linux environments and installation status.",
"parameters": {
"type": "object",
"properties": {},
@ -65,13 +65,13 @@ TOOL_SPECS: Final[list[dict[str, Any]]] = [
"parameters": {
"type": "object",
"properties": {
"profile": {"type": "string"},
"environment": {"type": "string"},
"vcpu_count": {"type": "integer"},
"mem_mib": {"type": "integer"},
"ttl_seconds": {"type": "integer"},
"network": {"type": "boolean"},
},
"required": ["profile", "vcpu_count", "mem_mib"],
"required": ["environment", "vcpu_count", "mem_mib"],
"additionalProperties": False,
},
},
@ -206,7 +206,7 @@ def _dispatch_tool_call(
ttl_seconds = arguments.get("ttl_seconds", 600)
timeout_seconds = arguments.get("timeout_seconds", 30)
return pyro.run_in_vm(
profile=_require_str(arguments, "profile"),
environment=_require_str(arguments, "environment"),
command=_require_str(arguments, "command"),
vcpu_count=_require_int(arguments, "vcpu_count"),
mem_mib=_require_int(arguments, "mem_mib"),
@ -214,12 +214,12 @@ def _dispatch_tool_call(
ttl_seconds=_require_int({"ttl_seconds": ttl_seconds}, "ttl_seconds"),
network=_require_bool(arguments, "network", default=False),
)
if tool_name == "vm_list_profiles":
return {"profiles": pyro.list_profiles()}
if tool_name == "vm_list_environments":
return {"environments": pyro.list_environments()}
if tool_name == "vm_create":
ttl_seconds = arguments.get("ttl_seconds", 600)
return pyro.create_vm(
profile=_require_str(arguments, "profile"),
environment=_require_str(arguments, "environment"),
vcpu_count=_require_int(arguments, "vcpu_count"),
mem_mib=_require_int(arguments, "mem_mib"),
ttl_seconds=_require_int({"ttl_seconds": ttl_seconds}, "ttl_seconds"),
@ -256,7 +256,7 @@ def _format_tool_error(tool_name: str, arguments: dict[str, Any], exc: Exception
def _run_direct_lifecycle_fallback(pyro: Pyro) -> dict[str, Any]:
return pyro.run_in_vm(
profile="debian-git",
environment="debian:12",
command=NETWORK_PROOF_COMMAND,
vcpu_count=1,
mem_mib=512,
@ -326,7 +326,7 @@ def run_ollama_tool_demo(
"content": (
"Use the VM tools to prove outbound internet access in an ephemeral VM.\n"
"Prefer `vm_run` unless a lower-level lifecycle step is strictly necessary.\n"
"Use profile `debian-git`, choose adequate vCPU/memory, "
"Use environment `debian:12`, choose adequate vCPU/memory, "
"and set `network` to true.\n"
f"Run this exact command: `{NETWORK_PROOF_COMMAND}`.\n"
f"Success means the clone completes and the command prints `true`.\n"


@ -1,4 +1,4 @@
"""Bundled runtime resolver and diagnostics."""
"""Embedded runtime resolver and diagnostics."""
from __future__ import annotations
@ -64,7 +64,7 @@ def resolve_runtime_paths(
platform: str = DEFAULT_PLATFORM,
verify_checksums: bool = True,
) -> RuntimePaths:
"""Resolve and validate bundled runtime assets."""
"""Resolve and validate embedded runtime assets."""
bundle_parent = Path(os.environ.get("PYRO_RUNTIME_BUNDLE_DIR", _default_bundle_parent()))
bundle_root = bundle_parent / platform
manifest_path = bundle_root / "manifest.json"
@ -102,7 +102,7 @@ def resolve_runtime_paths(
guest_agent_path = bundle_root / raw_agent_path
artifacts_dir = bundle_root / "profiles"
required_paths = [firecracker_bin, jailer_bin, artifacts_dir]
required_paths = [firecracker_bin, jailer_bin]
if guest_agent_path is not None:
required_paths.append(guest_agent_path)
@ -139,30 +139,6 @@ def resolve_runtime_paths(
f"runtime checksum mismatch for {full_path}; "
f"expected {raw_hash}, got {actual}"
)
profiles = manifest.get("profiles")
if not isinstance(profiles, dict):
raise RuntimeError("runtime manifest is missing `profiles`")
for profile_name, profile_spec in profiles.items():
if not isinstance(profile_spec, dict):
raise RuntimeError(f"profile manifest entry for {profile_name!r} is malformed")
for kind in ("kernel", "rootfs"):
spec = profile_spec.get(kind)
if not isinstance(spec, dict):
raise RuntimeError(f"profile {profile_name!r} is missing {kind} spec")
raw_path = spec.get("path")
raw_hash = spec.get("sha256")
if not isinstance(raw_path, str) or not isinstance(raw_hash, str):
raise RuntimeError(f"profile {profile_name!r} {kind} spec is malformed")
full_path = bundle_root / raw_path
if not full_path.exists():
raise RuntimeError(f"profile asset missing: {full_path}")
actual = _sha256(full_path)
if actual != raw_hash:
raise RuntimeError(
f"profile checksum mismatch for {full_path}; "
f"expected {raw_hash}, got {actual}"
)
return RuntimePaths(
bundle_root=bundle_root,
manifest_path=manifest_path,
@ -241,9 +217,9 @@ def doctor_report(*, platform: str = DEFAULT_PLATFORM) -> dict[str, Any]:
return report
capabilities = runtime_capabilities(paths)
from pyro_mcp.vm_environments import EnvironmentStore
profiles = paths.manifest.get("profiles", {})
profile_names = sorted(profiles.keys()) if isinstance(profiles, dict) else []
environment_store = EnvironmentStore(runtime_paths=paths)
report["runtime_ok"] = True
report["runtime"] = {
"bundle_root": str(paths.bundle_root),
@ -252,16 +228,19 @@ def doctor_report(*, platform: str = DEFAULT_PLATFORM) -> dict[str, Any]:
"jailer_bin": str(paths.jailer_bin),
"guest_agent_path": str(paths.guest_agent_path) if paths.guest_agent_path else None,
"artifacts_dir": str(paths.artifacts_dir),
"artifacts_present": paths.artifacts_dir.exists(),
"notice_path": str(paths.notice_path),
"bundle_version": paths.manifest.get("bundle_version"),
"component_versions": paths.manifest.get("component_versions", {}),
"profiles": profile_names,
"capabilities": {
"supports_vm_boot": capabilities.supports_vm_boot,
"supports_guest_exec": capabilities.supports_guest_exec,
"supports_guest_network": capabilities.supports_guest_network,
"reason": capabilities.reason,
},
"catalog_version": environment_store.catalog_version,
"cache_dir": str(environment_store.cache_dir),
"environments": environment_store.list_environments(),
}
if not report["kvm"]["exists"]:
report["issues"] = ["/dev/kvm is not available on this host"]
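The reshaped doctor report replaces the flat profile list with environment metadata. A minimal sketch of consuming the new shape; the field names follow the diff above, and every value is illustrative:

```python
# Field names follow the diff above; all values here are illustrative.
report = {
    "runtime_ok": True,
    "runtime": {
        "bundle_version": "1.0.0",
        "catalog_version": "1.0.0",
        "cache_dir": "/home/user/.cache/pyro-mcp/environments",
        "environments": [
            {"name": "debian:12", "installed": False},
            {"name": "debian:12-base", "installed": True},
        ],
    },
}

# Callers can now distinguish known environments from installed ones.
installed = [
    env["name"] for env in report["runtime"]["environments"] if env["installed"]
]
print(installed)
```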


@ -1,4 +1,4 @@
"""Direct Firecracker boot validation for a bundled runtime profile."""
"""Direct Firecracker boot validation for a curated environment."""
from __future__ import annotations
@ -12,13 +12,13 @@ from pathlib import Path
from types import SimpleNamespace
from pyro_mcp.runtime import resolve_runtime_paths
from pyro_mcp.vm_environments import EnvironmentStore, get_environment
from pyro_mcp.vm_firecracker import build_launch_plan
from pyro_mcp.vm_profiles import get_profile
@dataclass(frozen=True)
class BootCheckResult:
profile: str
environment: str
workdir: Path
firecracker_started: bool
vm_alive_after_wait: bool
@ -49,30 +49,31 @@ def _classify_result(*, firecracker_log: str, serial_log: str, vm_alive: bool) -
def run_boot_check(
*,
profile: str = "debian-base",
environment: str = "debian:12-base",
vcpu_count: int = 1,
mem_mib: int = 1024,
wait_seconds: int = 8,
keep_workdir: bool = False,
) -> BootCheckResult: # pragma: no cover - integration helper
get_profile(profile)
get_environment(environment)
if wait_seconds <= 0:
raise ValueError("wait_seconds must be positive")
runtime_paths = resolve_runtime_paths()
profile_dir = runtime_paths.artifacts_dir / profile
environment_store = EnvironmentStore(runtime_paths=runtime_paths)
installed_environment = environment_store.ensure_installed(environment)
workdir = Path(tempfile.mkdtemp(prefix="pyro-boot-check-"))
try:
rootfs_copy = workdir / "rootfs.ext4"
shutil.copy2(profile_dir / "rootfs.ext4", rootfs_copy)
shutil.copy2(installed_environment.rootfs_image, rootfs_copy)
instance = SimpleNamespace(
vm_id="abcd00000001",
vcpu_count=vcpu_count,
mem_mib=mem_mib,
workdir=workdir,
metadata={
"kernel_image": str(profile_dir / "vmlinux"),
"kernel_image": str(installed_environment.kernel_image),
"rootfs_image": str(rootfs_copy),
},
network=None,
@ -114,7 +115,7 @@ def run_boot_check(
vm_alive=vm_alive,
)
return BootCheckResult(
profile=profile,
environment=environment,
workdir=workdir,
firecracker_started="Successfully started microvm" in firecracker_log,
vm_alive_after_wait=vm_alive,
@ -131,7 +132,7 @@ def run_boot_check(
def main() -> None: # pragma: no cover - CLI wiring
parser = argparse.ArgumentParser(description="Run a direct Firecracker boot check.")
parser.add_argument("--profile", default="debian-base")
parser.add_argument("--environment", default="debian:12-base")
parser.add_argument("--vcpu-count", type=int, default=1)
parser.add_argument("--mem-mib", type=int, default=1024)
parser.add_argument("--wait-seconds", type=int, default=8)
@ -140,13 +141,13 @@ def main() -> None: # pragma: no cover - CLI wiring
args = parser.parse_args()
result = run_boot_check(
profile=args.profile,
environment=args.environment,
vcpu_count=args.vcpu_count,
mem_mib=args.mem_mib,
wait_seconds=args.wait_seconds,
keep_workdir=args.keep_workdir,
)
print(f"[boot] profile={result.profile}")
print(f"[boot] environment={result.environment}")
print(f"[boot] firecracker_started={result.firecracker_started}")
print(f"[boot] vm_alive_after_wait={result.vm_alive_after_wait}")
print(f"[boot] process_returncode={result.process_returncode}")


@ -9,7 +9,7 @@
"sha256": "86622337f91df329cca72bb21cd1324fb8b6fa47931601d65ee4b2c72ef2cae5"
}
},
"bundle_version": "0.1.0",
"bundle_version": "1.0.0",
"capabilities": {
"guest_exec": true,
"guest_network": true,


@ -1,4 +1,4 @@
"""Direct guest-network validation for a bundled runtime profile."""
"""Direct guest-network validation for a curated environment."""
from __future__ import annotations
@ -28,7 +28,7 @@ class NetworkCheckResult:
def run_network_check(
*,
profile: str = "debian-git",
environment: str = "debian:12",
vcpu_count: int = 1,
mem_mib: int = 1024,
ttl_seconds: int = 600,
@ -37,7 +37,7 @@ def run_network_check(
) -> NetworkCheckResult: # pragma: no cover - integration helper
pyro = Pyro(base_dir=base_dir)
result = pyro.run_in_vm(
profile=profile,
environment=environment,
command=NETWORK_CHECK_COMMAND,
vcpu_count=vcpu_count,
mem_mib=mem_mib,
@ -58,7 +58,7 @@ def run_network_check(
def main() -> None: # pragma: no cover - CLI wiring
parser = argparse.ArgumentParser(description="Run a guest networking check.")
parser.add_argument("--profile", default="debian-git")
parser.add_argument("--environment", default="debian:12")
parser.add_argument("--vcpu-count", type=int, default=1)
parser.add_argument("--mem-mib", type=int, default=1024)
parser.add_argument("--ttl-seconds", type=int, default=600)
@ -66,7 +66,7 @@ def main() -> None: # pragma: no cover - CLI wiring
args = parser.parse_args()
result = run_network_check(
profile=args.profile,
environment=args.environment,
vcpu_count=args.vcpu_count,
mem_mib=args.mem_mib,
ttl_seconds=args.ttl_seconds,


@ -0,0 +1,615 @@
"""Official environment catalog and local cache management."""
from __future__ import annotations
import json
import os
import shutil
import tarfile
import tempfile
import time
import urllib.error
import urllib.parse
import urllib.request
from dataclasses import dataclass
from pathlib import Path
from typing import Any
from pyro_mcp.runtime import DEFAULT_PLATFORM, RuntimePaths
DEFAULT_ENVIRONMENT_VERSION = "1.0.0"
DEFAULT_CATALOG_VERSION = "1.0.0"
OCI_MANIFEST_ACCEPT = ", ".join(
(
"application/vnd.oci.image.index.v1+json",
"application/vnd.oci.image.manifest.v1+json",
"application/vnd.docker.distribution.manifest.list.v2+json",
"application/vnd.docker.distribution.manifest.v2+json",
)
)
@dataclass(frozen=True)
class VmEnvironment:
"""Catalog entry describing a curated Linux environment."""
name: str
version: str
description: str
default_packages: tuple[str, ...]
distribution: str
distribution_version: str
source_profile: str
platform: str = DEFAULT_PLATFORM
source_url: str | None = None
oci_registry: str | None = None
oci_repository: str | None = None
oci_reference: str | None = None
source_digest: str | None = None
compatibility: str = ">=1.0.0,<2.0.0"
@dataclass(frozen=True)
class InstalledEnvironment:
"""Resolved environment artifact locations."""
name: str
version: str
install_dir: Path
kernel_image: Path
rootfs_image: Path
source: str
source_digest: str | None
installed: bool
CATALOG: dict[str, VmEnvironment] = {
"debian:12": VmEnvironment(
name="debian:12",
version=DEFAULT_ENVIRONMENT_VERSION,
description="Debian 12 environment with Git preinstalled for common agent workflows.",
default_packages=("bash", "coreutils", "git"),
distribution="debian",
distribution_version="12",
source_profile="debian-git",
oci_registry="ghcr.io",
oci_repository="thaloco/pyro-environments/debian-12",
oci_reference=DEFAULT_ENVIRONMENT_VERSION,
),
"debian:12-base": VmEnvironment(
name="debian:12-base",
version=DEFAULT_ENVIRONMENT_VERSION,
description="Minimal Debian 12 environment for shell and core Unix tooling.",
default_packages=("bash", "coreutils"),
distribution="debian",
distribution_version="12",
source_profile="debian-base",
oci_registry="ghcr.io",
oci_repository="thaloco/pyro-environments/debian-12-base",
oci_reference=DEFAULT_ENVIRONMENT_VERSION,
),
"debian:12-build": VmEnvironment(
name="debian:12-build",
version=DEFAULT_ENVIRONMENT_VERSION,
description="Debian 12 environment with Git and common build tools preinstalled.",
default_packages=("bash", "coreutils", "git", "gcc", "make", "cmake", "python3"),
distribution="debian",
distribution_version="12",
source_profile="debian-build",
oci_registry="ghcr.io",
oci_repository="thaloco/pyro-environments/debian-12-build",
oci_reference=DEFAULT_ENVIRONMENT_VERSION,
),
}
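Given this catalog, the on-disk cache layout can be sketched standalone. The scheme mirrors `_install_dir` later in this file (colon replaced by underscore, version suffixed); the platform segment below is an illustrative placeholder, not the project's `DEFAULT_PLATFORM` value:

```python
from pathlib import Path

# Mirrors the _install_dir scheme in this file: "debian:12" at version
# "1.0.0" lands in <cache>/<platform>/debian_12-1.0.0. The platform
# string is an illustrative placeholder.
def install_dir(cache_dir: Path, platform: str, name: str, version: str) -> Path:
    normalized = name.replace(":", "_")
    return cache_dir / platform / f"{normalized}-{version}"

path = install_dir(Path("/tmp/cache"), "linux-x86_64", "debian:12", "1.0.0")
print(path)
```

Replacing `:` keeps environment names filesystem-safe while the version suffix lets two catalog versions coexist side by side.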
def _default_cache_dir() -> Path:
return Path(
os.environ.get(
"PYRO_ENVIRONMENT_CACHE_DIR",
str(Path.home() / ".cache" / "pyro-mcp" / "environments"),
)
)
def _manifest_profile_digest(runtime_paths: RuntimePaths, profile_name: str) -> str | None:
profiles = runtime_paths.manifest.get("profiles")
if not isinstance(profiles, dict):
return None
profile = profiles.get(profile_name)
if not isinstance(profile, dict):
return None
rootfs = profile.get("rootfs")
if not isinstance(rootfs, dict):
return None
raw_digest = rootfs.get("sha256")
return raw_digest if isinstance(raw_digest, str) else None
def get_environment(name: str, *, runtime_paths: RuntimePaths | None = None) -> VmEnvironment:
"""Resolve a curated environment by name."""
try:
spec = CATALOG[name]
except KeyError as exc:
known = ", ".join(sorted(CATALOG))
raise ValueError(f"unknown environment {name!r}; expected one of: {known}") from exc
if runtime_paths is None:
return spec
return VmEnvironment(
name=spec.name,
version=spec.version,
description=spec.description,
default_packages=spec.default_packages,
distribution=spec.distribution,
distribution_version=spec.distribution_version,
source_profile=spec.source_profile,
platform=spec.platform,
source_url=spec.source_url,
oci_registry=spec.oci_registry,
oci_repository=spec.oci_repository,
oci_reference=spec.oci_reference,
source_digest=_manifest_profile_digest(runtime_paths, spec.source_profile),
compatibility=spec.compatibility,
)
def list_environments(*, runtime_paths: RuntimePaths | None = None) -> list[dict[str, object]]:
"""Return catalog metadata in a JSON-safe format."""
return [
_serialize_environment(get_environment(name, runtime_paths=runtime_paths))
for name in sorted(CATALOG)
]
def _serialize_environment(environment: VmEnvironment) -> dict[str, object]:
return {
"name": environment.name,
"version": environment.version,
"description": environment.description,
"default_packages": list(environment.default_packages),
"distribution": environment.distribution,
"distribution_version": environment.distribution_version,
"platform": environment.platform,
"oci_registry": environment.oci_registry,
"oci_repository": environment.oci_repository,
"oci_reference": environment.oci_reference,
"source_digest": environment.source_digest,
"compatibility": environment.compatibility,
}
class EnvironmentStore:
"""Install and inspect curated environments in a local cache."""
def __init__(
self,
*,
runtime_paths: RuntimePaths,
cache_dir: Path | None = None,
) -> None:
self._runtime_paths = runtime_paths
self._cache_dir = cache_dir or _default_cache_dir()
raw_platform = self._runtime_paths.manifest.get("platform", DEFAULT_PLATFORM)
platform = raw_platform if isinstance(raw_platform, str) else DEFAULT_PLATFORM
self._platform_dir = self._cache_dir / platform
@property
def cache_dir(self) -> Path:
return self._cache_dir
@property
def catalog_version(self) -> str:
return DEFAULT_CATALOG_VERSION
def list_environments(self) -> list[dict[str, object]]:
environments: list[dict[str, object]] = []
for name in sorted(CATALOG):
environments.append(self.inspect_environment(name))
return environments
def pull_environment(self, name: str) -> dict[str, object]:
installed = self.ensure_installed(name)
return {
**self.inspect_environment(name),
"install_dir": str(installed.install_dir),
"kernel_image": str(installed.kernel_image),
"rootfs_image": str(installed.rootfs_image),
"source": installed.source,
}
def inspect_environment(self, name: str) -> dict[str, object]:
spec = get_environment(name, runtime_paths=self._runtime_paths)
install_dir = self._install_dir(spec)
metadata_path = install_dir / "environment.json"
installed = metadata_path.exists() and (install_dir / "vmlinux").exists()
payload = _serialize_environment(spec)
payload.update(
{
"catalog_version": self.catalog_version,
"installed": installed,
"cache_dir": str(self._cache_dir),
"install_dir": str(install_dir),
}
)
if installed:
payload["install_manifest"] = str(metadata_path)
return payload
def ensure_installed(self, name: str) -> InstalledEnvironment:
spec = get_environment(name, runtime_paths=self._runtime_paths)
self._platform_dir.mkdir(parents=True, exist_ok=True)
install_dir = self._install_dir(spec)
metadata_path = install_dir / "environment.json"
if metadata_path.exists():
kernel_image = install_dir / "vmlinux"
rootfs_image = install_dir / "rootfs.ext4"
if kernel_image.exists() and rootfs_image.exists():
metadata = json.loads(metadata_path.read_text(encoding="utf-8"))
source = str(metadata.get("source", "cache"))
raw_digest = metadata.get("source_digest")
digest = raw_digest if isinstance(raw_digest, str) else None
return InstalledEnvironment(
name=spec.name,
version=spec.version,
install_dir=install_dir,
kernel_image=kernel_image,
rootfs_image=rootfs_image,
source=source,
source_digest=digest,
installed=True,
)
source_dir = self._runtime_paths.artifacts_dir / spec.source_profile
if source_dir.exists():
return self._install_from_local_source(spec, source_dir)
if (
spec.oci_registry is not None
and spec.oci_repository is not None
and spec.oci_reference is not None
):
return self._install_from_oci(spec)
if spec.source_url is not None:
return self._install_from_archive(spec, spec.source_url)
raise RuntimeError(
f"environment {spec.name!r} is not installed and no downloadable source is configured"
)
def prune_environments(self) -> dict[str, object]:
deleted: list[str] = []
if not self._platform_dir.exists():
return {"deleted_environment_dirs": [], "count": 0}
for child in self._platform_dir.iterdir():
if child.name.startswith(".partial-"):
shutil.rmtree(child, ignore_errors=True)
deleted.append(child.name)
continue
if not child.is_dir():
continue
marker = child / "environment.json"
if not marker.exists():
shutil.rmtree(child, ignore_errors=True)
deleted.append(child.name)
continue
metadata = json.loads(marker.read_text(encoding="utf-8"))
raw_name = metadata.get("name")
raw_version = metadata.get("version")
if not isinstance(raw_name, str) or not isinstance(raw_version, str):
shutil.rmtree(child, ignore_errors=True)
deleted.append(child.name)
continue
try:
spec = get_environment(raw_name, runtime_paths=self._runtime_paths)
except ValueError:
shutil.rmtree(child, ignore_errors=True)
deleted.append(child.name)
continue
if spec.version != raw_version:
shutil.rmtree(child, ignore_errors=True)
deleted.append(child.name)
return {"deleted_environment_dirs": sorted(deleted), "count": len(deleted)}
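The pruning rules above reduce to: drop `.partial-` leftovers, drop directories without a valid marker, and drop anything whose name or version no longer matches the catalog. A standalone sketch of the first two rules (the catalog-lookup rule is elided here):

```python
import json
import shutil
import tempfile
from pathlib import Path

# Standalone sketch of the first two prune rules above: .partial-*
# leftovers and directories missing an environment.json marker go away.
def prune(platform_dir: Path) -> list[str]:
    deleted = []
    for child in platform_dir.iterdir():
        if child.name.startswith(".partial-") or not (child / "environment.json").exists():
            shutil.rmtree(child, ignore_errors=True)
            deleted.append(child.name)
    return sorted(deleted)

with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / ".partial-abc").mkdir()          # interrupted install
    (root / "stale").mkdir()                 # no marker file
    keep = root / "debian_12-1.0.0"          # valid install survives
    keep.mkdir()
    (keep / "environment.json").write_text(json.dumps({"name": "debian:12"}))
    removed = prune(root)
print(removed)
```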
def _install_dir(self, spec: VmEnvironment) -> Path:
normalized = spec.name.replace(":", "_")
return self._platform_dir / f"{normalized}-{spec.version}"
def _install_from_local_source(
self, spec: VmEnvironment, source_dir: Path
) -> InstalledEnvironment:
install_dir = self._install_dir(spec)
temp_dir = Path(tempfile.mkdtemp(prefix=".partial-", dir=self._platform_dir))
try:
self._link_or_copy(source_dir / "vmlinux", temp_dir / "vmlinux")
self._link_or_copy(source_dir / "rootfs.ext4", temp_dir / "rootfs.ext4")
self._write_install_manifest(
temp_dir,
spec=spec,
source="bundled-runtime-source",
source_digest=spec.source_digest,
)
shutil.rmtree(install_dir, ignore_errors=True)
temp_dir.replace(install_dir)
except Exception:
shutil.rmtree(temp_dir, ignore_errors=True)
raise
return InstalledEnvironment(
name=spec.name,
version=spec.version,
install_dir=install_dir,
kernel_image=install_dir / "vmlinux",
rootfs_image=install_dir / "rootfs.ext4",
source="bundled-runtime-source",
source_digest=spec.source_digest,
installed=True,
)
def _install_from_archive(self, spec: VmEnvironment, archive_url: str) -> InstalledEnvironment:
install_dir = self._install_dir(spec)
temp_dir = Path(tempfile.mkdtemp(prefix=".partial-", dir=self._platform_dir))
archive_path = temp_dir / "environment.tgz"
try:
urllib.request.urlretrieve(archive_url, archive_path) # noqa: S310
self._extract_archive(archive_path, temp_dir)
kernel_image = self._locate_artifact(temp_dir, "vmlinux")
rootfs_image = self._locate_artifact(temp_dir, "rootfs.ext4")
if kernel_image.parent != temp_dir:
shutil.move(str(kernel_image), temp_dir / "vmlinux")
if rootfs_image.parent != temp_dir:
shutil.move(str(rootfs_image), temp_dir / "rootfs.ext4")
self._write_install_manifest(
temp_dir,
spec=spec,
source=archive_url,
source_digest=spec.source_digest,
)
archive_path.unlink(missing_ok=True)
shutil.rmtree(install_dir, ignore_errors=True)
temp_dir.replace(install_dir)
except Exception:
shutil.rmtree(temp_dir, ignore_errors=True)
raise
return InstalledEnvironment(
name=spec.name,
version=spec.version,
install_dir=install_dir,
kernel_image=install_dir / "vmlinux",
rootfs_image=install_dir / "rootfs.ext4",
source=archive_url,
source_digest=spec.source_digest,
installed=True,
)
def _install_from_oci(self, spec: VmEnvironment) -> InstalledEnvironment:
install_dir = self._install_dir(spec)
temp_dir = Path(tempfile.mkdtemp(prefix=".partial-", dir=self._platform_dir))
try:
manifest, resolved_digest = self._fetch_oci_manifest(spec)
layers = manifest.get("layers")
if not isinstance(layers, list) or not layers:
raise RuntimeError("OCI manifest did not contain any layers")
for index, layer in enumerate(layers):
if not isinstance(layer, dict):
raise RuntimeError("OCI manifest layer entry is malformed")
raw_digest = layer.get("digest")
if not isinstance(raw_digest, str):
raise RuntimeError("OCI manifest layer is missing a digest")
blob_path = temp_dir / f"layer-{index}.tar"
self._download_oci_blob(spec, raw_digest, blob_path)
self._extract_tar_archive(blob_path, temp_dir)
blob_path.unlink(missing_ok=True)
kernel_image = self._locate_artifact(temp_dir, "vmlinux")
rootfs_image = self._locate_artifact(temp_dir, "rootfs.ext4")
if kernel_image.parent != temp_dir:
shutil.move(str(kernel_image), temp_dir / "vmlinux")
if rootfs_image.parent != temp_dir:
shutil.move(str(rootfs_image), temp_dir / "rootfs.ext4")
source = (
f"oci://{spec.oci_registry}/{spec.oci_repository}:{spec.oci_reference}"
if spec.oci_registry is not None
and spec.oci_repository is not None
and spec.oci_reference is not None
else "oci://unknown"
)
self._write_install_manifest(
temp_dir,
spec=spec,
source=source,
source_digest=resolved_digest or spec.source_digest,
)
shutil.rmtree(install_dir, ignore_errors=True)
temp_dir.replace(install_dir)
except Exception:
shutil.rmtree(temp_dir, ignore_errors=True)
raise
return InstalledEnvironment(
name=spec.name,
version=spec.version,
install_dir=install_dir,
kernel_image=install_dir / "vmlinux",
rootfs_image=install_dir / "rootfs.ext4",
source=source,
source_digest=resolved_digest or spec.source_digest,
installed=True,
)
def _write_install_manifest(
self,
install_dir: Path,
*,
spec: VmEnvironment,
source: str,
source_digest: str | None,
) -> None:
payload = {
"catalog_version": self.catalog_version,
"name": spec.name,
"version": spec.version,
"source": source,
"source_digest": source_digest,
"installed_at": int(time.time()),
}
(install_dir / "environment.json").write_text(
json.dumps(payload, indent=2, sort_keys=True) + "\n",
encoding="utf-8",
)
def _extract_archive(self, archive_path: Path, dest_dir: Path) -> None:
self._extract_tar_archive(archive_path, dest_dir)
def _locate_artifact(self, root: Path, name: str) -> Path:
for candidate in root.rglob(name):
if candidate.is_file():
return candidate
raise RuntimeError(f"environment archive did not contain {name}")
def _link_or_copy(self, source: Path, dest: Path) -> None:
dest.parent.mkdir(parents=True, exist_ok=True)
relative_target = os.path.relpath(source, start=dest.parent)
try:
dest.symlink_to(relative_target)
except OSError:
shutil.copy2(source, dest)
def _fetch_oci_manifest(
self, spec: VmEnvironment
) -> tuple[dict[str, Any], str | None]:
if spec.oci_registry is None or spec.oci_repository is None or spec.oci_reference is None:
raise RuntimeError("OCI source metadata is incomplete")
headers = {"Accept": OCI_MANIFEST_ACCEPT}
payload, response_headers = self._request_bytes(
self._oci_url(
spec.oci_registry,
spec.oci_repository,
f"manifests/{spec.oci_reference}",
),
headers=headers,
repository=spec.oci_repository,
)
manifest = json.loads(payload.decode("utf-8"))
if not isinstance(manifest, dict):
raise RuntimeError("OCI manifest response was not a JSON object")
resolved_digest = response_headers.get("Docker-Content-Digest")
media_type = manifest.get("mediaType")
if media_type in {
"application/vnd.oci.image.index.v1+json",
"application/vnd.docker.distribution.manifest.list.v2+json",
}:
manifests = manifest.get("manifests")
if not isinstance(manifests, list):
raise RuntimeError("OCI index did not contain manifests")
selected = self._select_oci_manifest_descriptor(manifests)
payload, response_headers = self._request_bytes(
self._oci_url(
spec.oci_registry,
spec.oci_repository,
f"manifests/{selected}",
),
headers=headers,
repository=spec.oci_repository,
)
manifest = json.loads(payload.decode("utf-8"))
if not isinstance(manifest, dict):
raise RuntimeError("OCI child manifest response was not a JSON object")
resolved_digest = response_headers.get("Docker-Content-Digest") or selected
return manifest, resolved_digest
def _download_oci_blob(self, spec: VmEnvironment, digest: str, dest: Path) -> None:
if spec.oci_registry is None or spec.oci_repository is None:
raise RuntimeError("OCI source metadata is incomplete")
payload, _ = self._request_bytes(
self._oci_url(
spec.oci_registry,
spec.oci_repository,
f"blobs/{digest}",
),
headers={},
repository=spec.oci_repository,
)
dest.write_bytes(payload)
def _request_bytes(
self,
url: str,
*,
headers: dict[str, str],
repository: str,
) -> tuple[bytes, dict[str, str]]:
request = urllib.request.Request(url, headers=headers, method="GET")
try:
with urllib.request.urlopen(request, timeout=90) as response: # noqa: S310
return response.read(), dict(response.headers.items())
except urllib.error.HTTPError as exc:
if exc.code != 401:
raise RuntimeError(f"failed to fetch OCI resource {url}: {exc}") from exc
authenticate = exc.headers.get("WWW-Authenticate")
if authenticate is None:
raise RuntimeError("OCI registry denied access without an auth challenge") from exc
token = self._fetch_registry_token(authenticate, repository)
authenticated_request = urllib.request.Request(
url,
headers={**headers, "Authorization": f"Bearer {token}"},
method="GET",
)
with urllib.request.urlopen(authenticated_request, timeout=90) as response: # noqa: S310
return response.read(), dict(response.headers.items())
def _fetch_registry_token(self, authenticate: str, repository: str) -> str:
if not authenticate.startswith("Bearer "):
raise RuntimeError("unsupported OCI authentication scheme")
params = self._parse_authenticate_parameters(authenticate[len("Bearer ") :])
realm = params.get("realm")
if realm is None:
raise RuntimeError("OCI auth challenge did not include a token realm")
query = {
"service": params.get("service", ""),
"scope": params.get("scope", f"repository:{repository}:pull"),
}
token_url = f"{realm}?{urllib.parse.urlencode(query)}"
with urllib.request.urlopen(token_url, timeout=90) as response: # noqa: S310
payload = json.loads(response.read().decode("utf-8"))
if not isinstance(payload, dict):
raise RuntimeError("OCI auth token response was not a JSON object")
raw_token = payload.get("token") or payload.get("access_token")
if not isinstance(raw_token, str) or raw_token == "":
raise RuntimeError("OCI auth token response did not include a bearer token")
return raw_token
def _parse_authenticate_parameters(self, raw: str) -> dict[str, str]:
params: dict[str, str] = {}
for segment in raw.split(","):
if "=" not in segment:
continue
key, value = segment.split("=", 1)
params[key.strip()] = value.strip().strip('"')
return params
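The challenge parser above can be exercised standalone. The challenge string below is shaped like a registry `WWW-Authenticate: Bearer …` header, with illustrative repository and realm values:

```python
# Same parsing logic as _parse_authenticate_parameters above; the
# challenge values are illustrative, not real credentials.
def parse_bearer_params(raw: str) -> dict[str, str]:
    params: dict[str, str] = {}
    for segment in raw.split(","):
        if "=" not in segment:
            continue
        key, value = segment.split("=", 1)
        params[key.strip()] = value.strip().strip('"')
    return params

challenge = 'realm="https://ghcr.io/token",service="ghcr.io",scope="repository:acme/env:pull"'
params = parse_bearer_params(challenge)
print(params["realm"], params["scope"])
```

Note `split("=", 1)` keeps URLs intact: only the first `=` separates key from value, and the surrounding quotes are stripped afterwards.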
def _select_oci_manifest_descriptor(self, manifests: list[Any]) -> str:
for manifest in manifests:
if not isinstance(manifest, dict):
continue
platform = manifest.get("platform")
if not isinstance(platform, dict):
continue
os_name = platform.get("os")
architecture = platform.get("architecture")
raw_digest = manifest.get("digest")
if (
isinstance(os_name, str)
and isinstance(architecture, str)
and isinstance(raw_digest, str)
and os_name == "linux"
and architecture in {"amd64", "x86_64"}
):
return raw_digest
raise RuntimeError("OCI index did not contain a linux/amd64 manifest")
def _extract_tar_archive(self, archive_path: Path, dest_dir: Path) -> None:
dest_root = dest_dir.resolve()
with tarfile.open(archive_path, "r:*") as archive:
for member in archive.getmembers():
member_path = (dest_dir / member.name).resolve()
if not member_path.is_relative_to(dest_root):
raise RuntimeError(f"unsafe archive member path: {member.name}")
archive.extractall(dest_dir, filter="data")
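The extraction guard above rejects any member whose resolved path escapes the destination. A standalone sketch with an in-memory archive containing a `../` member (member names illustrative):

```python
import io
import tarfile
import tempfile
from pathlib import Path

# Same guard as _extract_tar_archive above: any member resolving outside
# the destination root is rejected before extraction.
def check_members(archive: tarfile.TarFile, dest_dir: Path) -> None:
    dest_root = dest_dir.resolve()
    for member in archive.getmembers():
        member_path = (dest_dir / member.name).resolve()
        if not member_path.is_relative_to(dest_root):
            raise RuntimeError(f"unsafe archive member path: {member.name}")

# Build a malicious archive in memory with a path-traversal member.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    tar.addfile(tarfile.TarInfo("../escape"), io.BytesIO(b""))
buf.seek(0)

with tempfile.TemporaryDirectory() as tmp, tarfile.open(fileobj=buf) as tar:
    try:
        check_members(tar, Path(tmp))
        rejected = False
    except RuntimeError:
        rejected = True
print(rejected)
```

The explicit pre-check belts-and-braces the `filter="data"` extraction filter, which also rejects traversal on Python 3.12+.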
def _oci_url(self, registry: str, repository: str, suffix: str) -> str:
return f"https://{registry}/v2/{repository}/{suffix}"


@ -19,10 +19,10 @@ from pyro_mcp.runtime import (
resolve_runtime_paths,
runtime_capabilities,
)
from pyro_mcp.vm_environments import EnvironmentStore, get_environment
from pyro_mcp.vm_firecracker import build_launch_plan
from pyro_mcp.vm_guest import VsockExecClient
from pyro_mcp.vm_network import NetworkConfig, TapNetworkManager
from pyro_mcp.vm_profiles import get_profile, list_profiles, resolve_artifacts
VmState = Literal["created", "started", "stopped"]
@ -32,7 +32,7 @@ class VmInstance:
"""In-memory VM lifecycle record."""
vm_id: str
profile: str
environment: str
vcpu_count: int
mem_mib: int
ttl_seconds: int
@ -85,6 +85,23 @@ def _run_host_command(workdir: Path, command: str, timeout_seconds: int) -> VmEx
)
def _copy_rootfs(source: Path, dest: Path) -> str:
dest.parent.mkdir(parents=True, exist_ok=True)
try:
proc = subprocess.run( # noqa: S603
["cp", "--reflink=auto", str(source), str(dest)],
text=True,
capture_output=True,
check=False,
)
if proc.returncode == 0:
return "reflink_or_copy"
except OSError:
pass
shutil.copy2(source, dest)
return "copy2"
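The new `_copy_rootfs` helper prefers a copy-on-write reflink (`cp --reflink=auto`, a GNU coreutils flag that is near-instant on btrfs/XFS) and falls back to a plain byte copy elsewhere. A standalone sketch of the same strategy:

```python
import shutil
import subprocess
import tempfile
from pathlib import Path

# Same strategy as _copy_rootfs above: try a CoW reflink via GNU cp
# (near-instant on btrfs/XFS), fall back to a plain byte copy.
def copy_rootfs(source: Path, dest: Path) -> str:
    dest.parent.mkdir(parents=True, exist_ok=True)
    try:
        proc = subprocess.run(
            ["cp", "--reflink=auto", str(source), str(dest)],
            capture_output=True,
            text=True,
            check=False,
        )
        if proc.returncode == 0:
            return "reflink_or_copy"
    except OSError:
        pass  # cp missing entirely; fall through to a byte copy
    shutil.copy2(source, dest)
    return "copy2"

with tempfile.TemporaryDirectory() as tmp:
    src = Path(tmp) / "rootfs.ext4"
    src.write_bytes(b"\0" * 4096)
    dest = Path(tmp) / "vm-0001" / "rootfs.ext4"
    mode = copy_rootfs(src, dest)
    size = dest.stat().st_size
print(mode, size)
```

Which branch runs depends on the host: `--reflink=auto` itself silently degrades to a regular copy on filesystems without reflink support, so `"copy2"` is only reached when `cp` is absent or fails outright.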
class VmBackend:
"""Backend interface for lifecycle operations."""
@ -132,14 +149,14 @@ class FirecrackerBackend(VmBackend): # pragma: no cover
def __init__(
self,
artifacts_dir: Path,
environment_store: EnvironmentStore,
firecracker_bin: Path,
jailer_bin: Path,
runtime_capabilities: RuntimeCapabilities,
network_manager: TapNetworkManager | None = None,
guest_exec_client: VsockExecClient | None = None,
) -> None:
self._artifacts_dir = artifacts_dir
self._environment_store = environment_store
self._firecracker_bin = firecracker_bin
self._jailer_bin = jailer_bin
self._runtime_capabilities = runtime_capabilities
@ -156,15 +173,26 @@ class FirecrackerBackend(VmBackend): # pragma: no cover
def create(self, instance: VmInstance) -> None:
instance.workdir.mkdir(parents=True, exist_ok=False)
try:
artifacts = resolve_artifacts(self._artifacts_dir, instance.profile)
if not artifacts.kernel_image.exists() or not artifacts.rootfs_image.exists():
installed_environment = self._environment_store.ensure_installed(instance.environment)
if (
not installed_environment.kernel_image.exists()
or not installed_environment.rootfs_image.exists()
):
raise RuntimeError(
f"missing profile artifacts for {instance.profile}; expected "
f"{artifacts.kernel_image} and {artifacts.rootfs_image}"
f"missing environment artifacts for {instance.environment}; expected "
f"{installed_environment.kernel_image} and {installed_environment.rootfs_image}"
)
instance.metadata["kernel_image"] = str(artifacts.kernel_image)
instance.metadata["environment_version"] = installed_environment.version
instance.metadata["environment_source"] = installed_environment.source
if installed_environment.source_digest is not None:
instance.metadata["environment_digest"] = installed_environment.source_digest
instance.metadata["environment_install_dir"] = str(installed_environment.install_dir)
instance.metadata["kernel_image"] = str(installed_environment.kernel_image)
rootfs_copy = instance.workdir / "rootfs.ext4"
shutil.copy2(artifacts.rootfs_image, rootfs_copy)
instance.metadata["rootfs_clone_mode"] = _copy_rootfs(
installed_environment.rootfs_image,
rootfs_copy,
)
instance.metadata["rootfs_image"] = str(rootfs_copy)
if instance.network_requested:
network = self._network_manager.allocate(instance.vm_id)
@@ -320,28 +348,35 @@ class VmManager:
*,
backend_name: str | None = None,
base_dir: Path | None = None,
artifacts_dir: Path | None = None,
cache_dir: Path | None = None,
max_active_vms: int = 4,
runtime_paths: RuntimePaths | None = None,
network_manager: TapNetworkManager | None = None,
) -> None:
self._backend_name = backend_name or "firecracker"
self._base_dir = base_dir or Path("/tmp/pyro-mcp")
resolved_cache_dir = cache_dir or self._base_dir / ".environment-cache"
self._runtime_paths = runtime_paths
if self._backend_name == "firecracker":
self._runtime_paths = self._runtime_paths or resolve_runtime_paths()
self._artifacts_dir = artifacts_dir or self._runtime_paths.artifacts_dir
self._runtime_capabilities = runtime_capabilities(self._runtime_paths)
else:
self._artifacts_dir = artifacts_dir or Path(
os.environ.get("PYRO_VM_ARTIFACTS_DIR", "/opt/pyro-mcp/artifacts")
self._environment_store = EnvironmentStore(
runtime_paths=self._runtime_paths,
cache_dir=resolved_cache_dir,
)
else:
self._runtime_capabilities = RuntimeCapabilities(
supports_vm_boot=False,
supports_guest_exec=False,
supports_guest_network=False,
reason="mock backend does not boot a guest",
)
if self._runtime_paths is None:
self._runtime_paths = resolve_runtime_paths(verify_checksums=False)
self._environment_store = EnvironmentStore(
runtime_paths=self._runtime_paths,
cache_dir=resolved_cache_dir,
)
self._max_active_vms = max_active_vms
if network_manager is not None:
self._network_manager = network_manager
@@ -361,7 +396,7 @@ class VmManager:
if self._runtime_paths is None:
raise RuntimeError("runtime paths were not initialized for firecracker backend")
return FirecrackerBackend(
self._artifacts_dir,
self._environment_store,
firecracker_bin=self._runtime_paths.firecracker_bin,
jailer_bin=self._runtime_paths.jailer_bin,
runtime_capabilities=self._runtime_capabilities,
@@ -369,20 +404,29 @@ class VmManager:
)
raise ValueError("invalid backend; expected one of: mock, firecracker")
def list_profiles(self) -> list[dict[str, object]]:
return list_profiles()
def list_environments(self) -> list[dict[str, object]]:
return self._environment_store.list_environments()
def pull_environment(self, environment: str) -> dict[str, object]:
return self._environment_store.pull_environment(environment)
def inspect_environment(self, environment: str) -> dict[str, object]:
return self._environment_store.inspect_environment(environment)
def prune_environments(self) -> dict[str, object]:
return self._environment_store.prune_environments()
def create_vm(
self,
*,
profile: str,
environment: str,
vcpu_count: int,
mem_mib: int,
ttl_seconds: int,
network: bool = False,
) -> dict[str, Any]:
self._validate_limits(vcpu_count=vcpu_count, mem_mib=mem_mib, ttl_seconds=ttl_seconds)
get_profile(profile)
get_environment(environment, runtime_paths=self._runtime_paths)
now = time.time()
with self._lock:
self._reap_expired_locked(now)
@@ -394,7 +438,7 @@
vm_id = uuid.uuid4().hex[:12]
instance = VmInstance(
vm_id=vm_id,
profile=profile,
environment=environment,
vcpu_count=vcpu_count,
mem_mib=mem_mib,
ttl_seconds=ttl_seconds,
@@ -410,7 +454,7 @@
def run_vm(
self,
*,
profile: str,
environment: str,
command: str,
vcpu_count: int,
mem_mib: int,
@@ -419,7 +463,7 @@
network: bool = False,
) -> dict[str, Any]:
created = self.create_vm(
profile=profile,
environment=environment,
vcpu_count=vcpu_count,
mem_mib=mem_mib,
ttl_seconds=ttl_seconds,
@@ -459,6 +503,8 @@
cleanup = self.delete_vm(vm_id, reason="post_exec_cleanup")
return {
"vm_id": vm_id,
"environment": instance.environment,
"environment_version": instance.metadata.get("environment_version"),
"command": command,
"stdout": exec_result.stdout,
"stderr": exec_result.stderr,
@@ -532,7 +578,8 @@
def _serialize(self, instance: VmInstance) -> dict[str, Any]:
return {
"vm_id": instance.vm_id,
"profile": instance.profile,
"environment": instance.environment,
"environment_version": instance.metadata.get("environment_version"),
"vcpu_count": instance.vcpu_count,
"mem_mib": instance.mem_mib,
"ttl_seconds": instance.ttl_seconds,

View file

@@ -1,72 +0,0 @@
"""Standard VM environment profiles for ephemeral coding environments."""
from __future__ import annotations
from dataclasses import dataclass
from pathlib import Path
@dataclass(frozen=True)
class VmProfile:
"""Profile metadata describing guest OS/tooling flavor."""
name: str
description: str
default_packages: tuple[str, ...]
@dataclass(frozen=True)
class VmArtifacts:
"""Resolved artifact paths for a profile."""
kernel_image: Path
rootfs_image: Path
PROFILE_CATALOG: dict[str, VmProfile] = {
"debian-base": VmProfile(
name="debian-base",
description="Minimal Debian userspace for shell and core Unix tooling.",
default_packages=("bash", "coreutils"),
),
"debian-git": VmProfile(
name="debian-git",
description="Debian base environment with Git preinstalled.",
default_packages=("bash", "coreutils", "git"),
),
"debian-build": VmProfile(
name="debian-build",
description="Debian Git environment with common build tools for source builds.",
default_packages=("bash", "coreutils", "git", "gcc", "make", "cmake", "python3"),
),
}
def list_profiles() -> list[dict[str, object]]:
"""Return profile metadata in a JSON-safe format."""
return [
{
"name": profile.name,
"description": profile.description,
"default_packages": list(profile.default_packages),
}
for profile in PROFILE_CATALOG.values()
]
def get_profile(name: str) -> VmProfile:
"""Resolve a profile by name."""
try:
return PROFILE_CATALOG[name]
except KeyError as exc:
known = ", ".join(sorted(PROFILE_CATALOG))
raise ValueError(f"unknown profile {name!r}; expected one of: {known}") from exc
def resolve_artifacts(artifacts_dir: Path, profile_name: str) -> VmArtifacts:
"""Resolve kernel/rootfs file locations for a profile."""
profile_dir = artifacts_dir / profile_name
return VmArtifacts(
kernel_image=profile_dir / "vmlinux",
rootfs_image=profile_dir / "rootfs.ext4",
)
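The deleted `vm_profiles` module above is superseded by `pyro_mcp.vm_environments`, whose source is not part of this diff. A minimal sketch of the catalog-lookup pattern it carries over, with illustrative names and entries (`VmEnvironment` fields and the sample `CATALOG` contents are assumptions, not the real module):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class VmEnvironment:
    """Assumed shape of an environment catalog entry."""

    name: str
    version: str
    description: str


# Hypothetical entries mirroring the names exercised by the tests below.
CATALOG: dict[str, VmEnvironment] = {
    "debian:12": VmEnvironment("debian:12", "1.0.0", "Debian 12 with Git preinstalled."),
    "debian:12-base": VmEnvironment("debian:12-base", "1.0.0", "Minimal Debian 12 userspace."),
}


def get_environment(name: str) -> VmEnvironment:
    """Resolve a catalog entry, rejecting unknown names like the old get_profile."""
    try:
        return CATALOG[name]
    except KeyError as exc:
        known = ", ".join(sorted(CATALOG))
        raise ValueError(f"unknown environment {name!r}; expected one of: {known}") from exc
```

The error-message convention (`unknown environment …; expected one of: …`) matches what `test_get_environment_rejects_unknown` asserts against.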

View file

@@ -18,7 +18,7 @@ def test_pyro_run_in_vm_delegates_to_manager(tmp_path: Path) -> None:
)
)
result = pyro.run_in_vm(
profile="debian-base",
environment="debian:12-base",
command="printf 'ok\\n'",
vcpu_count=1,
mem_mib=512,
@@ -72,7 +72,7 @@ def test_pyro_vm_run_tool_executes(tmp_path: Path) -> None:
await server.call_tool(
"vm_run",
{
"profile": "debian-base",
"environment": "debian:12-base",
"command": "printf 'ok\\n'",
"vcpu_count": 1,
"mem_mib": 512,

View file

@@ -23,7 +23,7 @@ def test_cli_run_prints_json(
def parse_args(self) -> argparse.Namespace:
return argparse.Namespace(
command="run",
profile="debian-git",
environment="debian:12",
vcpu_count=1,
mem_mib=512,
timeout_seconds=30,
@@ -84,6 +84,24 @@ def test_cli_demo_ollama_prints_summary(
assert "[summary] exit_code=0 fallback_used=False execution_mode=guest_vsock" in output
def test_cli_env_list_prints_json(
monkeypatch: pytest.MonkeyPatch, capsys: pytest.CaptureFixture[str]
) -> None:
class StubPyro:
def list_environments(self) -> list[dict[str, object]]:
return [{"name": "debian:12", "installed": False}]
class StubParser:
def parse_args(self) -> argparse.Namespace:
return argparse.Namespace(command="env", env_command="list")
monkeypatch.setattr(cli, "_build_parser", lambda: StubParser())
monkeypatch.setattr(cli, "Pyro", StubPyro)
cli.main()
output = json.loads(capsys.readouterr().out)
assert output["environments"][0]["name"] == "debian:12"
def test_cli_requires_run_command() -> None:
with pytest.raises(ValueError, match="command is required"):
cli._require_command([])

View file

@@ -18,7 +18,7 @@ def test_run_demo_happy_path(monkeypatch: pytest.MonkeyPatch) -> None:
def run_in_vm(
self,
*,
profile: str,
environment: str,
command: str,
vcpu_count: int,
mem_mib: int,
@@ -30,7 +30,7 @@ def test_run_demo_happy_path(monkeypatch: pytest.MonkeyPatch) -> None:
(
"run_in_vm",
{
"profile": profile,
"environment": environment,
"command": command,
"vcpu_count": vcpu_count,
"mem_mib": mem_mib,
@@ -50,7 +50,7 @@ def test_run_demo_happy_path(monkeypatch: pytest.MonkeyPatch) -> None:
(
"run_in_vm",
{
"profile": "debian-git",
"environment": "debian:12",
"command": "git --version",
"vcpu_count": 1,
"mem_mib": 512,

View file

@@ -35,7 +35,7 @@ def test_langchain_example_delegates_to_pyro(monkeypatch: pytest.MonkeyPatch) ->
)(),
)
result = module.run_vm_run_tool(
profile="debian-git",
environment="debian:12",
command="git --version",
vcpu_count=1,
mem_mib=1024,

View file

@@ -31,7 +31,7 @@ def _stepwise_model_response(payload: dict[str, Any], step: int) -> dict[str, An
"message": {
"role": "assistant",
"content": "",
"tool_calls": [{"id": "1", "function": {"name": "vm_list_profiles"}}],
"tool_calls": [{"id": "1", "function": {"name": "vm_list_environments"}}],
}
}
]
@@ -50,7 +50,7 @@ def _stepwise_model_response(payload: dict[str, Any], step: int) -> dict[str, An
"name": "vm_run",
"arguments": json.dumps(
{
"profile": "debian-git",
"environment": "debian:12",
"command": "printf 'true\\n'",
"vcpu_count": 1,
"mem_mib": 512,
@@ -117,7 +117,7 @@ def test_run_ollama_tool_demo_recovers_from_bad_vm_id(
"name": "vm_exec",
"arguments": json.dumps(
{
"vm_id": "vm_list_profiles",
"vm_id": "vm_list_environments",
"command": ollama_demo.NETWORK_PROOF_COMMAND,
}
),
@@ -157,15 +157,15 @@ def test_run_ollama_tool_demo_resolves_vm_id_placeholder(
"role": "assistant",
"content": "",
"tool_calls": [
{"id": "1", "function": {"name": "vm_list_profiles"}},
{"id": "2", "function": {"name": "vm_list_profiles"}},
{"id": "1", "function": {"name": "vm_list_environments"}},
{"id": "2", "function": {"name": "vm_list_environments"}},
{
"id": "3",
"function": {
"name": "vm_create",
"arguments": json.dumps(
{
"profile": "debian-git",
"environment": "debian:12",
"vcpu_count": "2",
"mem_mib": "2048",
}
@@ -217,7 +217,12 @@ def test_run_ollama_tool_demo_resolves_vm_id_placeholder(
def test_dispatch_tool_call_vm_exec_autostarts_created_vm(tmp_path: Path) -> None:
pyro = RealPyro(manager=RealVmManager(backend_name="mock", base_dir=tmp_path / "vms"))
created = pyro.create_vm(profile="debian-base", vcpu_count=1, mem_mib=512, ttl_seconds=60)
created = pyro.create_vm(
environment="debian:12-base",
vcpu_count=1,
mem_mib=512,
ttl_seconds=60,
)
vm_id = str(created["vm_id"])
executed = ollama_demo._dispatch_tool_call(
@@ -291,7 +296,7 @@ def test_run_ollama_tool_demo_verbose_logs_values(monkeypatch: pytest.MonkeyPatc
assert result["fallback_used"] is False
assert str(result["exec_result"]["stdout"]).strip() == "true"
assert any("[model] input user:" in line for line in logs)
assert any("[model] tool_call vm_list_profiles args={}" in line for line in logs)
assert any("[model] tool_call vm_list_environments args={}" in line for line in logs)
assert any("[tool] result vm_run " in line for line in logs)
@@ -299,7 +304,7 @@ def test_run_ollama_tool_demo_verbose_logs_values(monkeypatch: pytest.MonkeyPatc
("tool_call", "error"),
[
(1, "invalid tool call entry"),
({"id": "", "function": {"name": "vm_list_profiles"}}, "valid call id"),
({"id": "", "function": {"name": "vm_list_environments"}}, "valid call id"),
({"id": "1"}, "function metadata"),
({"id": "1", "function": {"name": 3}}, "name is invalid"),
],
@@ -326,7 +331,7 @@ def test_run_ollama_tool_demo_max_rounds(monkeypatch: pytest.MonkeyPatch) -> Non
{
"message": {
"role": "assistant",
"tool_calls": [{"id": "1", "function": {"name": "vm_list_profiles"}}],
"tool_calls": [{"id": "1", "function": {"name": "vm_list_environments"}}],
}
}
]
@@ -384,13 +389,13 @@ def test_run_ollama_tool_demo_exec_result_validation(
def test_dispatch_tool_call_coverage(tmp_path: Path) -> None:
pyro = RealPyro(manager=RealVmManager(backend_name="mock", base_dir=tmp_path / "vms"))
profiles = ollama_demo._dispatch_tool_call(pyro, "vm_list_profiles", {})
assert "profiles" in profiles
environments = ollama_demo._dispatch_tool_call(pyro, "vm_list_environments", {})
assert "environments" in environments
created = ollama_demo._dispatch_tool_call(
pyro,
"vm_create",
{
"profile": "debian-base",
"environment": "debian:12-base",
"vcpu_count": "1",
"mem_mib": "512",
"ttl_seconds": "60",
@@ -412,7 +417,7 @@ def test_dispatch_tool_call_coverage(tmp_path: Path) -> None:
pyro,
"vm_run",
{
"profile": "debian-base",
"environment": "debian:12-base",
"command": "printf 'true\\n'",
"vcpu_count": "1",
"mem_mib": "512",

View file

@@ -33,7 +33,7 @@ def test_openai_example_runs_function_call_loop(monkeypatch: pytest.MonkeyPatch)
name="vm_run",
call_id="call_123",
arguments=(
'{"profile":"debian-git","command":"git --version",'
'{"environment":"debian:12","command":"git --version",'
'"vcpu_count":1,"mem_mib":1024}'
),
)

View file

@@ -15,6 +15,7 @@ from pyro_mcp.cli import _build_parser
from pyro_mcp.contract import (
PUBLIC_CLI_COMMANDS,
PUBLIC_CLI_DEMO_SUBCOMMANDS,
PUBLIC_CLI_ENV_SUBCOMMANDS,
PUBLIC_CLI_RUN_FLAGS,
PUBLIC_MCP_TOOLS,
PUBLIC_SDK_METHODS,
@@ -49,14 +50,19 @@ def test_public_cli_help_lists_commands_and_run_flags() -> None:
run_parser = _build_parser()
run_help = run_parser.parse_args(
["run", "--profile", "debian-base", "--vcpu-count", "1", "--mem-mib", "512", "--", "true"]
["run", "debian:12-base", "--vcpu-count", "1", "--mem-mib", "512", "--", "true"]
)
assert run_help.command == "run"
assert run_help.environment == "debian:12-base"
run_help_text = _subparser_choice(parser, "run").format_help()
for flag in PUBLIC_CLI_RUN_FLAGS:
assert flag in run_help_text
env_help_text = _subparser_choice(parser, "env").format_help()
for subcommand_name in PUBLIC_CLI_ENV_SUBCOMMANDS:
assert subcommand_name in env_help_text
demo_help_text = _subparser_choice(parser, "demo").format_help()
for subcommand_name in PUBLIC_CLI_DEMO_SUBCOMMANDS:
assert subcommand_name in demo_help_text

View file

@@ -80,6 +80,7 @@ def test_doctor_report_has_runtime_fields() -> None:
assert "firecracker_bin" in runtime
assert "guest_agent_path" in runtime
assert "component_versions" in runtime
assert "environments" in runtime
networking = report["networking"]
assert isinstance(networking, dict)
assert "tun_available" in networking

View file

@@ -27,7 +27,7 @@ def test_network_check_uses_network_enabled_manager(monkeypatch: pytest.MonkeyPa
result = runtime_network_check.run_network_check()
assert observed["run_kwargs"] == {
"profile": "debian-git",
"environment": "debian:12",
"command": runtime_network_check.NETWORK_CHECK_COMMAND,
"vcpu_count": 1,
"mem_mib": 1024,

View file

@@ -27,7 +27,7 @@ def test_create_server_registers_vm_tools(tmp_path: Path) -> None:
tool_names = asyncio.run(_run())
assert "vm_create" in tool_names
assert "vm_exec" in tool_names
assert "vm_list_profiles" in tool_names
assert "vm_list_environments" in tool_names
assert "vm_network_info" in tool_names
assert "vm_run" in tool_names
assert "vm_status" in tool_names
@@ -54,7 +54,7 @@ def test_vm_run_round_trip(tmp_path: Path) -> None:
await server.call_tool(
"vm_run",
{
"profile": "debian-git",
"environment": "debian:12",
"command": "printf 'git version 2.0\\n'",
"vcpu_count": 1,
"mem_mib": 512,
@@ -95,19 +95,24 @@ def test_vm_tools_status_stop_delete_and_reap(tmp_path: Path) -> None:
dict[str, Any],
]:
server = create_server(manager=manager)
profiles_raw = await server.call_tool("vm_list_profiles", {})
if not isinstance(profiles_raw, tuple) or len(profiles_raw) != 2:
raise TypeError("unexpected profiles result")
_, profiles_structured = profiles_raw
if not isinstance(profiles_structured, dict):
raise TypeError("profiles tool should return a dictionary")
raw_profiles = profiles_structured.get("result")
if not isinstance(raw_profiles, list):
raise TypeError("profiles tool did not contain a result list")
environments_raw = await server.call_tool("vm_list_environments", {})
if not isinstance(environments_raw, tuple) or len(environments_raw) != 2:
raise TypeError("unexpected environments result")
_, environments_structured = environments_raw
if not isinstance(environments_structured, dict):
raise TypeError("environments tool should return a dictionary")
raw_environments = environments_structured.get("result")
if not isinstance(raw_environments, list):
raise TypeError("environments tool did not contain a result list")
created = _extract_structured(
await server.call_tool(
"vm_create",
{"profile": "debian-base", "vcpu_count": 1, "mem_mib": 512, "ttl_seconds": 600},
{
"environment": "debian:12-base",
"vcpu_count": 1,
"mem_mib": 512,
"ttl_seconds": 600,
},
)
)
vm_id = str(created["vm_id"])
@@ -120,7 +125,12 @@ def test_vm_tools_status_stop_delete_and_reap(tmp_path: Path) -> None:
expiring = _extract_structured(
await server.call_tool(
"vm_create",
{"profile": "debian-base", "vcpu_count": 1, "mem_mib": 512, "ttl_seconds": 1},
{
"environment": "debian:12-base",
"vcpu_count": 1,
"mem_mib": 512,
"ttl_seconds": 1,
},
)
)
expiring_id = str(expiring["vm_id"])
@@ -131,16 +141,16 @@ def test_vm_tools_status_stop_delete_and_reap(tmp_path: Path) -> None:
network,
stopped,
deleted,
cast(list[dict[str, object]], raw_profiles),
cast(list[dict[str, object]], raw_environments),
reaped,
)
status, network, stopped, deleted, profiles, reaped = asyncio.run(_run())
status, network, stopped, deleted, environments, reaped = asyncio.run(_run())
assert status["state"] == "started"
assert network["network_enabled"] is False
assert stopped["state"] == "stopped"
assert bool(deleted["deleted"]) is True
assert profiles[0]["name"] == "debian-base"
assert environments[0]["name"] == "debian:12"
assert int(reaped["count"]) == 1

View file

@@ -0,0 +1,153 @@
from __future__ import annotations
import tarfile
from pathlib import Path
import pytest
from pyro_mcp.runtime import resolve_runtime_paths
from pyro_mcp.vm_environments import EnvironmentStore, get_environment, list_environments
def test_list_environments_includes_expected_entries() -> None:
environments = list_environments(runtime_paths=resolve_runtime_paths())
names = {str(entry["name"]) for entry in environments}
assert {"debian:12", "debian:12-base", "debian:12-build"} <= names
def test_get_environment_rejects_unknown() -> None:
with pytest.raises(ValueError, match="unknown environment"):
get_environment("does-not-exist")
def test_environment_store_installs_from_local_runtime_source(tmp_path: Path) -> None:
store = EnvironmentStore(runtime_paths=resolve_runtime_paths(), cache_dir=tmp_path / "cache")
installed = store.ensure_installed("debian:12")
assert installed.kernel_image.exists()
assert installed.rootfs_image.exists()
assert (installed.install_dir / "environment.json").exists()
def test_environment_store_pull_and_cached_inspect(tmp_path: Path) -> None:
store = EnvironmentStore(runtime_paths=resolve_runtime_paths(), cache_dir=tmp_path / "cache")
before = store.inspect_environment("debian:12")
assert before["installed"] is False
pulled = store.pull_environment("debian:12")
assert pulled["installed"] is True
assert "install_manifest" in pulled
cached = store.ensure_installed("debian:12")
assert cached.installed is True
after = store.inspect_environment("debian:12")
assert after["installed"] is True
assert "install_manifest" in after
def test_environment_store_uses_env_override_for_default_cache_dir(
monkeypatch: pytest.MonkeyPatch, tmp_path: Path
) -> None:
monkeypatch.setenv("PYRO_ENVIRONMENT_CACHE_DIR", str(tmp_path / "override-cache"))
store = EnvironmentStore(runtime_paths=resolve_runtime_paths())
assert store.cache_dir == tmp_path / "override-cache"
def test_environment_store_installs_from_archive_when_runtime_source_missing(
tmp_path: Path, monkeypatch: pytest.MonkeyPatch
) -> None:
runtime_paths = resolve_runtime_paths()
source_environment = get_environment("debian:12-base", runtime_paths=runtime_paths)
archive_dir = tmp_path / "archive"
archive_dir.mkdir(parents=True, exist_ok=True)
(archive_dir / "vmlinux").write_text("kernel\n", encoding="utf-8")
(archive_dir / "rootfs.ext4").write_text("rootfs\n", encoding="utf-8")
archive_path = tmp_path / "environment.tgz"
with tarfile.open(archive_path, "w:gz") as archive:
archive.add(archive_dir / "vmlinux", arcname="vmlinux")
archive.add(archive_dir / "rootfs.ext4", arcname="rootfs.ext4")
missing_bundle = tmp_path / "bundle"
platform_root = missing_bundle / "linux-x86_64"
platform_root.mkdir(parents=True, exist_ok=True)
(missing_bundle / "NOTICE").write_text(
runtime_paths.notice_path.read_text(encoding="utf-8"),
encoding="utf-8",
)
(platform_root / "manifest.json").write_text(
runtime_paths.manifest_path.read_text(encoding="utf-8"),
encoding="utf-8",
)
(platform_root / "bin").mkdir(parents=True, exist_ok=True)
(platform_root / "bin" / "firecracker").write_bytes(runtime_paths.firecracker_bin.read_bytes())
(platform_root / "bin" / "jailer").write_bytes(runtime_paths.jailer_bin.read_bytes())
guest_agent_path = runtime_paths.guest_agent_path
if guest_agent_path is None:
raise AssertionError("expected guest agent path")
(platform_root / "guest").mkdir(parents=True, exist_ok=True)
(platform_root / "guest" / "pyro_guest_agent.py").write_text(
guest_agent_path.read_text(encoding="utf-8"),
encoding="utf-8",
)
monkeypatch.setenv("PYRO_RUNTIME_BUNDLE_DIR", str(missing_bundle))
monkeypatch.setattr(
"pyro_mcp.vm_environments.CATALOG",
{
"debian:12-base": source_environment.__class__(
name=source_environment.name,
version=source_environment.version,
description=source_environment.description,
default_packages=source_environment.default_packages,
distribution=source_environment.distribution,
distribution_version=source_environment.distribution_version,
source_profile=source_environment.source_profile,
platform=source_environment.platform,
source_url=archive_path.resolve().as_uri(),
source_digest=source_environment.source_digest,
compatibility=source_environment.compatibility,
)
},
)
store = EnvironmentStore(
runtime_paths=resolve_runtime_paths(verify_checksums=False),
cache_dir=tmp_path / "cache",
)
installed = store.ensure_installed("debian:12-base")
assert installed.kernel_image.read_text(encoding="utf-8") == "kernel\n"
assert installed.rootfs_image.read_text(encoding="utf-8") == "rootfs\n"
def test_environment_store_prunes_stale_entries(tmp_path: Path) -> None:
store = EnvironmentStore(runtime_paths=resolve_runtime_paths(), cache_dir=tmp_path / "cache")
platform_dir = store.cache_dir / "linux-x86_64"
platform_dir.mkdir(parents=True, exist_ok=True)
(platform_dir / ".partial-download").mkdir()
(platform_dir / "missing-marker").mkdir()
invalid = platform_dir / "invalid"
invalid.mkdir()
(invalid / "environment.json").write_text('{"name": 1, "version": 2}', encoding="utf-8")
unknown = platform_dir / "unknown"
unknown.mkdir()
(unknown / "environment.json").write_text(
'{"name": "unknown:1", "version": "1.0.0"}',
encoding="utf-8",
)
stale = platform_dir / "stale"
stale.mkdir()
(stale / "environment.json").write_text(
'{"name": "debian:12", "version": "0.9.0"}',
encoding="utf-8",
)
result = store.prune_environments()
assert result["count"] == 5
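The prune test above sets up five kinds of stale cache entries: a leftover partial download, a directory without a manifest, a manifest with invalid field types, an environment name not in the catalog, and a version that no longer matches. A minimal sketch of a pruning pass satisfying those rules; `KNOWN` and `prune` are illustrative names, not the real `EnvironmentStore` API:

```python
import json
import shutil
import tempfile
from pathlib import Path

# Assumed catalog: debian:12 at version 1.0.0.
KNOWN = {"debian:12": "1.0.0"}


def prune(platform_dir: Path) -> int:
    """Remove cache entries lacking a valid, catalog-matching manifest."""
    removed = 0
    for entry in list(platform_dir.iterdir()):
        manifest = entry / "environment.json"
        keep = False
        if manifest.is_file():
            try:
                meta = json.loads(manifest.read_text(encoding="utf-8"))
                name = meta.get("name")
                keep = isinstance(name, str) and KNOWN.get(name) == meta.get("version")
            except json.JSONDecodeError:
                keep = False
        if not keep:
            shutil.rmtree(entry)
            removed += 1
    return removed


with tempfile.TemporaryDirectory() as tmp:
    platform_dir = Path(tmp)
    for name in (".partial-download", "missing-marker"):
        (platform_dir / name).mkdir()
    for name, manifest_text in (
        ("invalid", '{"name": 1, "version": 2}'),
        ("unknown", '{"name": "unknown:1", "version": "1.0.0"}'),
        ("stale", '{"name": "debian:12", "version": "0.9.0"}'),
    ):
        entry = platform_dir / name
        entry.mkdir()
        (entry / "environment.json").write_text(manifest_text, encoding="utf-8")
    count = prune(platform_dir)
    print(count)  # 5
```

All five entries fail one of the keep conditions, matching the `result["count"] == 5` assertion in the test.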

View file

@@ -17,7 +17,12 @@ def test_vm_manager_lifecycle_and_auto_cleanup(tmp_path: Path) -> None:
base_dir=tmp_path / "vms",
network_manager=TapNetworkManager(enabled=False),
)
created = manager.create_vm(profile="debian-git", vcpu_count=1, mem_mib=512, ttl_seconds=600)
created = manager.create_vm(
environment="debian:12",
vcpu_count=1,
mem_mib=512,
ttl_seconds=600,
)
vm_id = str(created["vm_id"])
started = manager.start_vm(vm_id)
assert started["state"] == "started"
@@ -37,9 +42,12 @@ def test_vm_manager_exec_timeout(tmp_path: Path) -> None:
network_manager=TapNetworkManager(enabled=False),
)
vm_id = str(
manager.create_vm(profile="debian-base", vcpu_count=1, mem_mib=512, ttl_seconds=600)[
"vm_id"
]
manager.create_vm(
environment="debian:12-base",
vcpu_count=1,
mem_mib=512,
ttl_seconds=600,
)["vm_id"]
)
manager.start_vm(vm_id)
result = manager.exec_vm(vm_id, command="sleep 2", timeout_seconds=1)
@@ -54,9 +62,12 @@ def test_vm_manager_stop_and_delete(tmp_path: Path) -> None:
network_manager=TapNetworkManager(enabled=False),
)
vm_id = str(
manager.create_vm(profile="debian-base", vcpu_count=1, mem_mib=512, ttl_seconds=600)[
"vm_id"
]
manager.create_vm(
environment="debian:12-base",
vcpu_count=1,
mem_mib=512,
ttl_seconds=600,
)["vm_id"]
)
manager.start_vm(vm_id)
stopped = manager.stop_vm(vm_id)
@@ -73,7 +84,12 @@ def test_vm_manager_reaps_expired(tmp_path: Path) -> None:
)
manager.MIN_TTL_SECONDS = 1
vm_id = str(
manager.create_vm(profile="debian-base", vcpu_count=1, mem_mib=512, ttl_seconds=1)["vm_id"]
manager.create_vm(
environment="debian:12-base",
vcpu_count=1,
mem_mib=512,
ttl_seconds=1,
)["vm_id"]
)
instance = manager._instances[vm_id] # noqa: SLF001
instance.expires_at = 0.0
@@ -91,7 +107,12 @@ def test_vm_manager_reaps_started_vm(tmp_path: Path) -> None:
)
manager.MIN_TTL_SECONDS = 1
vm_id = str(
manager.create_vm(profile="debian-base", vcpu_count=1, mem_mib=512, ttl_seconds=1)["vm_id"]
manager.create_vm(
environment="debian:12-base",
vcpu_count=1,
mem_mib=512,
ttl_seconds=1,
)["vm_id"]
)
manager.start_vm(vm_id)
manager._instances[vm_id].expires_at = 0.0 # noqa: SLF001
@@ -114,7 +135,7 @@ def test_vm_manager_validates_limits(tmp_path: Path, kwargs: dict[str, Any], msg
network_manager=TapNetworkManager(enabled=False),
)
with pytest.raises(ValueError, match=msg):
manager.create_vm(profile="debian-base", **kwargs)
manager.create_vm(environment="debian:12-base", **kwargs)
def test_vm_manager_max_active_limit(tmp_path: Path) -> None:
@@ -124,9 +145,9 @@ def test_vm_manager_max_active_limit(tmp_path: Path) -> None:
max_active_vms=1,
network_manager=TapNetworkManager(enabled=False),
)
manager.create_vm(profile="debian-base", vcpu_count=1, mem_mib=512, ttl_seconds=600)
manager.create_vm(environment="debian:12-base", vcpu_count=1, mem_mib=512, ttl_seconds=600)
with pytest.raises(RuntimeError, match="max active VMs reached"):
manager.create_vm(profile="debian-base", vcpu_count=1, mem_mib=512, ttl_seconds=600)
manager.create_vm(environment="debian:12-base", vcpu_count=1, mem_mib=512, ttl_seconds=600)
def test_vm_manager_state_validation(tmp_path: Path) -> None:
@@ -136,9 +157,12 @@ def test_vm_manager_state_validation(tmp_path: Path) -> None:
network_manager=TapNetworkManager(enabled=False),
)
vm_id = str(
manager.create_vm(profile="debian-base", vcpu_count=1, mem_mib=512, ttl_seconds=600)[
"vm_id"
]
manager.create_vm(
environment="debian:12-base",
vcpu_count=1,
mem_mib=512,
ttl_seconds=600,
)["vm_id"]
)
with pytest.raises(RuntimeError, match="must be in 'started' state"):
manager.exec_vm(vm_id, command="echo hi", timeout_seconds=30)
@@ -157,7 +181,12 @@ def test_vm_manager_status_expired_raises(tmp_path: Path) -> None:
)
manager.MIN_TTL_SECONDS = 1
vm_id = str(
manager.create_vm(profile="debian-base", vcpu_count=1, mem_mib=512, ttl_seconds=1)["vm_id"]
manager.create_vm(
environment="debian:12-base",
vcpu_count=1,
mem_mib=512,
ttl_seconds=1,
)["vm_id"]
)
manager._instances[vm_id].expires_at = 0.0 # noqa: SLF001
with pytest.raises(RuntimeError, match="expired and was automatically deleted"):
@@ -179,7 +208,12 @@ def test_vm_manager_network_info(tmp_path: Path) -> None:
base_dir=tmp_path / "vms",
network_manager=TapNetworkManager(enabled=False),
)
created = manager.create_vm(profile="debian-base", vcpu_count=1, mem_mib=512, ttl_seconds=600)
created = manager.create_vm(
environment="debian:12-base",
vcpu_count=1,
mem_mib=512,
ttl_seconds=600,
)
vm_id = str(created["vm_id"])
status = manager.status_vm(vm_id)
info = manager.network_info_vm(vm_id)
@@ -195,7 +229,7 @@ def test_vm_manager_run_vm(tmp_path: Path) -> None:
network_manager=TapNetworkManager(enabled=False),
)
result = manager.run_vm(
profile="debian-base",
environment="debian:12-base",
command="printf 'ok\\n'",
vcpu_count=1,
mem_mib=512,
@@ -213,13 +247,13 @@ def test_vm_manager_firecracker_backend_path(
class StubFirecrackerBackend:
def __init__(
self,
artifacts_dir: Path,
environment_store: Any,
firecracker_bin: Path,
jailer_bin: Path,
runtime_capabilities: Any,
network_manager: TapNetworkManager,
) -> None:
self.artifacts_dir = artifacts_dir
self.environment_store = environment_store
self.firecracker_bin = firecracker_bin
self.jailer_bin = jailer_bin
self.runtime_capabilities = runtime_capabilities

View file

@@ -1,24 +0,0 @@
from __future__ import annotations
from pathlib import Path
import pytest
from pyro_mcp.vm_profiles import get_profile, list_profiles, resolve_artifacts
def test_list_profiles_includes_expected_entries() -> None:
profiles = list_profiles()
names = {str(entry["name"]) for entry in profiles}
assert {"debian-base", "debian-git", "debian-build"} <= names
def test_get_profile_rejects_unknown() -> None:
with pytest.raises(ValueError, match="unknown profile"):
get_profile("does-not-exist")
def test_resolve_artifacts() -> None:
artifacts = resolve_artifacts(Path("/tmp/artifacts"), "debian-git")
assert str(artifacts.kernel_image).endswith("/debian-git/vmlinux")
assert str(artifacts.rootfs_image).endswith("/debian-git/rootfs.ext4")