Bootstrap pyro_mcp v0.0.1 with MCP static tool and Ollama demo

Commit 11d6f4bcb4 · 18 changed files with 1945 additions and 0 deletions
.gitattributes (vendored, new file)
@@ -0,0 +1,2 @@
src/pyro_mcp/runtime_bundle/**/rootfs.ext4 filter=lfs diff=lfs merge=lfs -text
src/pyro_mcp/runtime_bundle/**/vmlinux filter=lfs diff=lfs merge=lfs -text
.gitignore (vendored, new file)
@@ -0,0 +1,10 @@
.venv/
__pycache__/
.mypy_cache/
.pytest_cache/
.ruff_cache/
.coverage
htmlcov/
dist/
build/
*.egg-info/
.pre-commit-config.yaml (new file)
@@ -0,0 +1,18 @@
repos:
  - repo: local
    hooks:
      - id: ruff-check
        name: ruff-check
        entry: uv run ruff check .
        language: system
        pass_filenames: false
      - id: mypy
        name: mypy
        entry: uv run mypy
        language: system
        pass_filenames: false
      - id: pytest
        name: pytest
        entry: uv run pytest
        language: system
        pass_filenames: false
.python-version (new file)
@@ -0,0 +1 @@
3.12
AGENTS.md (new file)
@@ -0,0 +1,32 @@
# AGENTS.md

Repository guidance for contributors and coding agents.

## Purpose

This repository ships `pyro-mcp`, a minimal MCP-compatible Python package with a static tool for demonstration and testing.

## Development Workflow

- Use `uv` for all Python environment and command execution.
- Run `make setup` after cloning.
- Run `make check` before opening a PR.
- Use `make demo` to verify the static tool behavior manually.
- Use `make ollama-demo` to validate model-triggered tool usage with Ollama.

## Quality Gates

- Linting: `ruff`
- Type checking: `mypy` (strict mode)
- Tests: `pytest` with coverage threshold

These checks run in pre-commit hooks and should all pass locally.

## Key API Contract

- Public factory: `pyro_mcp.create_server()`
- Tool name: `hello_static`
- Tool output:
  - `message`: `hello from pyro_mcp`
  - `status`: `ok`
  - `version`: `0.0.1`
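The Key API Contract above pins an exact payload. A minimal stdlib-only sketch of the check a consumer could run against that contract (the `validate_payload` helper is hypothetical, not part of pyro_mcp):

```python
import json

# Hypothetical consumer-side validator for the contract above;
# pyro_mcp itself does not ship this helper.
EXPECTED_PAYLOAD = {
    "message": "hello from pyro_mcp",
    "status": "ok",
    "version": "0.0.1",
}


def validate_payload(raw: str) -> dict[str, str]:
    """Decode a hello_static result and check it against the v0.0.1 contract."""
    payload = json.loads(raw)
    if payload != EXPECTED_PAYLOAD:
        raise ValueError("payload does not match the v0.0.1 contract")
    return payload


result = validate_payload(json.dumps(EXPECTED_PAYLOAD, sort_keys=True))
print(result["status"])  # → ok
```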
Makefile (new file)
@@ -0,0 +1,36 @@
PYTHON ?= uv run python
OLLAMA_BASE_URL ?= http://localhost:11434/v1
OLLAMA_MODEL ?= llama3.2:3b

.PHONY: setup lint format typecheck test check demo ollama ollama-demo run-server install-hooks

setup:
	uv sync --dev

lint:
	uv run ruff check .

format:
	uv run ruff format .

typecheck:
	uv run mypy

test:
	uv run pytest

check: lint typecheck test

demo:
	uv run python examples/static_tool_demo.py

ollama: ollama-demo

ollama-demo:
	uv run pyro-mcp-ollama-demo --base-url "$(OLLAMA_BASE_URL)" --model "$(OLLAMA_MODEL)"

run-server:
	uv run pyro-mcp-server

install-hooks:
	uv run pre-commit install
README.md (new file)
@@ -0,0 +1,90 @@
# pyro-mcp

`pyro-mcp` is a minimal Python library that exposes an MCP-compatible server with one static tool.

## v0.0.1 Features

- Official Python MCP SDK integration.
- Public server factory: `pyro_mcp.create_server()`.
- One static MCP tool: `hello_static`.
- Runnable demonstration script.
- Project automation via `Makefile`, `pre-commit`, `ruff`, `mypy`, and `pytest`.

## Requirements

- Python 3.12+
- `uv` installed

## Setup

```bash
make setup
```

This installs runtime and development dependencies into `.venv`.

## Run the demo

```bash
make demo
```

Expected output:

```json
{
  "message": "hello from pyro_mcp",
  "status": "ok",
  "version": "0.0.1"
}
```

## Run the Ollama tool-calling demo

Start Ollama and ensure the model is available (defaults to `llama3.2:3b`):

```bash
ollama serve
ollama pull llama3.2:3b
```

Then run:

```bash
make ollama-demo
```

You can also run `make ollama demo` to execute both demos in one command.
The Make target defaults to the `llama3.2:3b` model and can be overridden:

```bash
make ollama-demo OLLAMA_MODEL=llama3.1:8b
```

## Run checks

```bash
make check
```

`make check` runs:

- `ruff` lint checks
- `mypy` type checks
- `pytest` (with coverage threshold configured in `pyproject.toml`)

## Run MCP server (stdio transport)

```bash
make run-server
```

## Pre-commit

Install hooks:

```bash
make install-hooks
```

Hooks run `ruff`, `mypy`, and `pytest` on each commit.
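The Ollama demo described in the README talks to the OpenAI-compatible `/chat/completions` endpoint. A stdlib-only sketch of the first request body it sends (field names follow the OpenAI-compatible schema; the user prompt here is shortened for illustration):

```python
import json

# Tool advertisement: an OpenAI-style function spec with an empty
# parameter schema, mirroring TOOL_SPEC in src/pyro_mcp/ollama_demo.py.
tool_spec = {
    "type": "function",
    "function": {
        "name": "hello_static",
        "description": "Returns a deterministic static payload from pyro_mcp.",
        "parameters": {"type": "object", "properties": {}, "additionalProperties": False},
    },
}

# First-round request: the model sees the tool and "tool_choice": "auto",
# so it may answer with a tool_call instead of plain content.
request_body = {
    "model": "llama3.2:3b",
    "messages": [{"role": "user", "content": "Use the hello_static tool."}],
    "tools": [tool_spec],
    "tool_choice": "auto",
    "temperature": 0,
}
encoded = json.dumps(request_body).encode("utf-8")  # bytes POSTed to /chat/completions
print(json.loads(encoded)["tools"][0]["function"]["name"])  # → hello_static
```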
examples/ollama_tool_demo.py (new file)
@@ -0,0 +1,8 @@
"""Run the Ollama tool-calling demo."""

from __future__ import annotations

from pyro_mcp.ollama_demo import main

if __name__ == "__main__":
    main()
examples/static_tool_demo.py (new file)
@@ -0,0 +1,17 @@
"""Example script that proves the static MCP tool works."""

from __future__ import annotations

import asyncio
import json

from pyro_mcp.demo import run_demo


def main() -> None:
    payload = asyncio.run(run_demo())
    print(json.dumps(payload, indent=2, sort_keys=True))


if __name__ == "__main__":
    main()
pyproject.toml (new file)
@@ -0,0 +1,47 @@
[project]
name = "pyro-mcp"
version = "0.0.1"
description = "A minimal MCP-ready Python tool library."
readme = "README.md"
authors = [
    { name = "Thales Maciel", email = "thales@thalesmaciel.com" }
]
requires-python = ">=3.12"
dependencies = [
    "mcp>=1.26.0",
]

[project.scripts]
pyro-mcp-server = "pyro_mcp.server:main"
pyro-mcp-demo = "pyro_mcp.demo:main"
pyro-mcp-ollama-demo = "pyro_mcp.ollama_demo:main"

[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

[dependency-groups]
dev = [
    "mypy>=1.19.1",
    "pre-commit>=4.5.1",
    "pytest>=9.0.2",
    "pytest-cov>=7.0.0",
    "ruff>=0.15.4",
]

[tool.pytest.ini_options]
testpaths = ["tests"]
addopts = "--cov=pyro_mcp --cov-report=term-missing --cov-fail-under=90"

[tool.ruff]
target-version = "py312"
line-length = 100

[tool.ruff.lint]
select = ["E", "F", "I", "B"]

[tool.mypy]
python_version = "3.12"
strict = true
warn_unused_configs = true
files = ["src", "tests", "examples"]
src/pyro_mcp/__init__.py (new file)
@@ -0,0 +1,5 @@
"""Public package surface for pyro_mcp."""

from pyro_mcp.server import HELLO_STATIC_PAYLOAD, create_server

__all__ = ["HELLO_STATIC_PAYLOAD", "create_server"]
src/pyro_mcp/demo.py (new file)
@@ -0,0 +1,35 @@
"""Runnable demonstration for the static MCP tool."""

from __future__ import annotations

import asyncio
import json
from collections.abc import Sequence

from mcp.types import TextContent

from pyro_mcp.server import HELLO_STATIC_PAYLOAD, create_server


async def run_demo() -> dict[str, str]:
    """Call the static MCP tool and return its structured payload."""
    server = create_server()
    result = await server.call_tool("hello_static", {})
    blocks, structured = result

    are_text_blocks = all(isinstance(item, TextContent) for item in blocks)
    if not isinstance(blocks, Sequence) or not are_text_blocks:
        raise TypeError("unexpected MCP content block output")
    if not isinstance(structured, dict):
        raise TypeError("expected a structured dictionary payload")
    if structured != HELLO_STATIC_PAYLOAD:
        raise ValueError("static payload did not match expected value")

    typed: dict[str, str] = {str(key): str(value) for key, value in structured.items()}
    return typed


def main() -> None:
    """Run the demonstration and print the JSON payload."""
    payload = asyncio.run(run_demo())
    print(json.dumps(payload, indent=2, sort_keys=True))
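The validation chain in `run_demo` above can be exercised without the `mcp` package. A stdlib-only sketch of the same checks, where `TextBlock` is a stand-in for `mcp.types.TextContent` (a hypothetical name, not the real type):

```python
from collections.abc import Sequence

# Stand-in for mcp.types.TextContent, just for this sketch.
class TextBlock:
    def __init__(self, text: str) -> None:
        self.text = text


def validate_result(result: tuple[object, object], expected: dict[str, str]) -> dict[str, str]:
    """Mirror run_demo's checks on the (content blocks, structured payload) tuple."""
    blocks, structured = result
    if not isinstance(blocks, Sequence) or not all(isinstance(b, TextBlock) for b in blocks):
        raise TypeError("unexpected content block output")
    if not isinstance(structured, dict):
        raise TypeError("expected a structured dictionary payload")
    if structured != expected:
        raise ValueError("payload did not match expected value")
    # Same normalization step as run_demo: coerce keys and values to str.
    return {str(k): str(v) for k, v in structured.items()}


expected = {"message": "hello from pyro_mcp", "status": "ok", "version": "0.0.1"}
print(validate_result(([TextBlock("x")], dict(expected)), expected) == expected)  # → True
```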
src/pyro_mcp/ollama_demo.py (new file)
@@ -0,0 +1,175 @@
"""Ollama chat-completions demo that triggers `hello_static` tool usage."""

from __future__ import annotations

import argparse
import asyncio
import json
import urllib.error
import urllib.request
from typing import Any, Final, cast

from pyro_mcp.demo import run_demo

DEFAULT_OLLAMA_BASE_URL: Final[str] = "http://localhost:11434/v1"
DEFAULT_OLLAMA_MODEL: Final[str] = "llama3.2:3b"
TOOL_NAME: Final[str] = "hello_static"

TOOL_SPEC: Final[dict[str, Any]] = {
    "type": "function",
    "function": {
        "name": TOOL_NAME,
        "description": "Returns a deterministic static payload from pyro_mcp.",
        "parameters": {
            "type": "object",
            "properties": {},
            "additionalProperties": False,
        },
    },
}


def _post_chat_completion(base_url: str, payload: dict[str, Any]) -> dict[str, Any]:
    endpoint = f"{base_url.rstrip('/')}/chat/completions"
    body = json.dumps(payload).encode("utf-8")
    request = urllib.request.Request(
        endpoint,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    try:
        with urllib.request.urlopen(request, timeout=60) as response:
            response_text = response.read().decode("utf-8")
    except urllib.error.URLError as exc:
        raise RuntimeError(
            "failed to call Ollama. Ensure `ollama serve` is running and the model is available."
        ) from exc

    parsed = json.loads(response_text)
    if not isinstance(parsed, dict):
        raise TypeError("unexpected Ollama response shape")
    return cast(dict[str, Any], parsed)


def _extract_message(response: dict[str, Any]) -> dict[str, Any]:
    choices = response.get("choices")
    if not isinstance(choices, list) or not choices:
        raise RuntimeError("Ollama response did not contain completion choices")
    first = choices[0]
    if not isinstance(first, dict):
        raise RuntimeError("unexpected completion choice format")
    message = first.get("message")
    if not isinstance(message, dict):
        raise RuntimeError("completion choice did not contain a message")
    return cast(dict[str, Any], message)


def _parse_tool_arguments(raw_arguments: Any) -> dict[str, Any]:
    if raw_arguments is None:
        return {}
    if isinstance(raw_arguments, dict):
        return cast(dict[str, Any], raw_arguments)
    if isinstance(raw_arguments, str):
        if raw_arguments.strip() == "":
            return {}
        parsed = json.loads(raw_arguments)
        if not isinstance(parsed, dict):
            raise TypeError("tool arguments must decode to an object")
        return cast(dict[str, Any], parsed)
    raise TypeError("tool arguments must be a dictionary or JSON object string")


def run_ollama_tool_demo(
    base_url: str = DEFAULT_OLLAMA_BASE_URL,
    model: str = DEFAULT_OLLAMA_MODEL,
) -> dict[str, Any]:
    """Ask Ollama to call the static tool, execute it, and return final model output."""
    messages: list[dict[str, Any]] = [
        {
            "role": "user",
            "content": (
                "Use the hello_static tool and then summarize its payload in one short sentence."
            ),
        }
    ]
    first_payload: dict[str, Any] = {
        "model": model,
        "messages": messages,
        "tools": [TOOL_SPEC],
        "tool_choice": "auto",
        "temperature": 0,
    }
    first_response = _post_chat_completion(base_url, first_payload)
    assistant_message = _extract_message(first_response)

    tool_calls = assistant_message.get("tool_calls")
    if not isinstance(tool_calls, list) or not tool_calls:
        raise RuntimeError("model did not trigger any tool call")

    messages.append(
        {
            "role": "assistant",
            "content": str(assistant_message.get("content") or ""),
            "tool_calls": tool_calls,
        }
    )

    tool_payload: dict[str, str] | None = None
    for tool_call in tool_calls:
        if not isinstance(tool_call, dict):
            raise RuntimeError("invalid tool call entry returned by model")
        function = tool_call.get("function")
        if not isinstance(function, dict):
            raise RuntimeError("tool call did not include function metadata")
        name = function.get("name")
        if name != TOOL_NAME:
            raise RuntimeError(f"unexpected tool requested by model: {name!r}")
        arguments = _parse_tool_arguments(function.get("arguments"))
        if arguments:
            raise RuntimeError("hello_static does not accept arguments")
        call_id = tool_call.get("id")
        if not isinstance(call_id, str) or call_id == "":
            raise RuntimeError("tool call did not provide a valid call id")

        tool_payload = asyncio.run(run_demo())
        messages.append(
            {
                "role": "tool",
                "tool_call_id": call_id,
                "name": TOOL_NAME,
                "content": json.dumps(tool_payload, sort_keys=True),
            }
        )

    if tool_payload is None:
        raise RuntimeError("tool payload was not generated")

    second_payload: dict[str, Any] = {
        "model": model,
        "messages": messages,
        "temperature": 0,
    }
    second_response = _post_chat_completion(base_url, second_payload)
    final_message = _extract_message(second_response)

    return {
        "model": model,
        "tool_name": TOOL_NAME,
        "tool_payload": tool_payload,
        "final_response": str(final_message.get("content") or ""),
    }


def _build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="Run Ollama tool-calling demo for pyro_mcp.")
    parser.add_argument("--base-url", default=DEFAULT_OLLAMA_BASE_URL)
    parser.add_argument("--model", default=DEFAULT_OLLAMA_MODEL)
    return parser


def main() -> None:
    """CLI entrypoint for Ollama tool-calling demo."""
    args = _build_parser().parse_args()
    result = run_ollama_tool_demo(base_url=args.base_url, model=args.model)
    print(json.dumps(result, indent=2, sort_keys=True))
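The two-round exchange in `run_ollama_tool_demo` can be sketched without any network call, as the message transcript it builds between the first and second request (plain dicts; `call_1` is an illustrative call id, the real one comes from the model):

```python
import json

# Payload the local tool produces, matching HELLO_STATIC_PAYLOAD in
# src/pyro_mcp/server.py.
tool_payload = {"message": "hello from pyro_mcp", "status": "ok", "version": "0.0.1"}

messages = [{"role": "user", "content": "Use the hello_static tool."}]

# Round 1: the model answers with a tool_call instead of content.
messages.append(
    {
        "role": "assistant",
        "content": "",
        "tool_calls": [
            {
                "id": "call_1",
                "type": "function",
                "function": {"name": "hello_static", "arguments": "{}"},
            }
        ],
    }
)

# The demo executes the tool locally and appends its JSON result as a
# tool message, keyed back to the call via tool_call_id.
messages.append(
    {
        "role": "tool",
        "tool_call_id": "call_1",
        "name": "hello_static",
        "content": json.dumps(tool_payload, sort_keys=True),
    }
)

# Round 2 sends this full transcript back so the model can summarize.
print([m["role"] for m in messages])  # → ['user', 'assistant', 'tool']
```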
src/pyro_mcp/server.py (new file)
@@ -0,0 +1,30 @@
"""MCP server definition for the v0.0.1 static tool demo."""

from __future__ import annotations

from typing import Final

from mcp.server.fastmcp import FastMCP

HELLO_STATIC_PAYLOAD: Final[dict[str, str]] = {
    "message": "hello from pyro_mcp",
    "status": "ok",
    "version": "0.0.1",
}


def create_server() -> FastMCP:
    """Create and return a configured MCP server instance."""
    server = FastMCP(name="pyro_mcp")

    @server.tool()
    async def hello_static() -> dict[str, str]:
        """Return a deterministic static payload."""
        return HELLO_STATIC_PAYLOAD.copy()

    return server


def main() -> None:
    """Run the MCP server over stdio."""
    create_server().run(transport="stdio")
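`create_server` relies on FastMCP's decorator-registration pattern: `@server.tool()` records the async function under its own name so it can later be dispatched by `call_tool`. A toy registry illustrating that pattern (this is NOT the real FastMCP implementation, just a stdlib sketch):

```python
import asyncio
from typing import Any, Awaitable, Callable

# Toy stand-in for the decorator-registration pattern; FastMCP's real
# internals are richer (schemas, content blocks, transports).
class ToyServer:
    def __init__(self, name: str) -> None:
        self.name = name
        self._tools: dict[str, Callable[[], Awaitable[Any]]] = {}

    def tool(self) -> Callable[[Callable[[], Awaitable[Any]]], Callable[[], Awaitable[Any]]]:
        def register(fn: Callable[[], Awaitable[Any]]) -> Callable[[], Awaitable[Any]]:
            self._tools[fn.__name__] = fn  # registered under the function name
            return fn

        return register

    async def call_tool(self, name: str) -> Any:
        return await self._tools[name]()


server = ToyServer(name="pyro_mcp")


@server.tool()
async def hello_static() -> dict[str, str]:
    return {"message": "hello from pyro_mcp", "status": "ok", "version": "0.0.1"}


print(asyncio.run(server.call_tool("hello_static"))["status"])  # → ok
```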
tests/test_demo.py (new file)
@@ -0,0 +1,86 @@
from __future__ import annotations

import asyncio
from collections.abc import Sequence
from typing import Any

import pytest
from mcp.types import TextContent

import pyro_mcp.demo as demo_module
from pyro_mcp.demo import run_demo
from pyro_mcp.server import HELLO_STATIC_PAYLOAD


def test_run_demo_returns_static_payload() -> None:
    payload = asyncio.run(run_demo())
    assert payload == HELLO_STATIC_PAYLOAD


def test_run_demo_raises_for_non_text_blocks(monkeypatch: pytest.MonkeyPatch) -> None:
    class StubServer:
        async def call_tool(
            self,
            name: str,
            arguments: dict[str, Any],
        ) -> tuple[Sequence[int], dict[str, str]]:
            assert name == "hello_static"
            assert arguments == {}
            return [123], HELLO_STATIC_PAYLOAD

    monkeypatch.setattr(demo_module, "create_server", lambda: StubServer())

    with pytest.raises(TypeError, match="unexpected MCP content block output"):
        asyncio.run(demo_module.run_demo())


def test_run_demo_raises_for_non_dict_payload(monkeypatch: pytest.MonkeyPatch) -> None:
    class StubServer:
        async def call_tool(
            self,
            name: str,
            arguments: dict[str, Any],
        ) -> tuple[list[TextContent], str]:
            assert name == "hello_static"
            assert arguments == {}
            return [TextContent(type="text", text="x")], "bad"

    monkeypatch.setattr(demo_module, "create_server", lambda: StubServer())

    with pytest.raises(TypeError, match="expected a structured dictionary payload"):
        asyncio.run(demo_module.run_demo())


def test_run_demo_raises_for_unexpected_payload(monkeypatch: pytest.MonkeyPatch) -> None:
    class StubServer:
        async def call_tool(
            self,
            name: str,
            arguments: dict[str, Any],
        ) -> tuple[list[TextContent], dict[str, str]]:
            assert name == "hello_static"
            assert arguments == {}
            return [TextContent(type="text", text="x")], {
                "message": "different",
                "status": "ok",
                "version": "0.0.1",
            }

    monkeypatch.setattr(demo_module, "create_server", lambda: StubServer())

    with pytest.raises(ValueError, match="static payload did not match expected value"):
        asyncio.run(demo_module.run_demo())


def test_demo_main_prints_json(
    monkeypatch: pytest.MonkeyPatch,
    capsys: pytest.CaptureFixture[str],
) -> None:
    async def fake_run_demo() -> dict[str, str]:
        return HELLO_STATIC_PAYLOAD

    monkeypatch.setattr(demo_module, "run_demo", fake_run_demo)
    demo_module.main()

    output = capsys.readouterr().out
    assert '"message": "hello from pyro_mcp"' in output
tests/test_ollama_demo.py (new file)
@@ -0,0 +1,255 @@
from __future__ import annotations

import argparse
import json
import urllib.error
import urllib.request
from typing import Any

import pytest

import pyro_mcp.ollama_demo as ollama_demo
from pyro_mcp.server import HELLO_STATIC_PAYLOAD


def test_run_ollama_tool_demo_triggers_tool_and_returns_final_response(
    monkeypatch: pytest.MonkeyPatch,
) -> None:
    requests: list[dict[str, Any]] = []

    def fake_post_chat_completion(base_url: str, payload: dict[str, Any]) -> dict[str, Any]:
        assert base_url == "http://localhost:11434/v1"
        requests.append(payload)
        if len(requests) == 1:
            return {
                "choices": [
                    {
                        "message": {
                            "role": "assistant",
                            "content": "",
                            "tool_calls": [
                                {
                                    "id": "call_1",
                                    "type": "function",
                                    "function": {"name": "hello_static", "arguments": "{}"},
                                }
                            ],
                        }
                    }
                ]
            }
        return {
            "choices": [
                {
                    "message": {
                        "role": "assistant",
                        "content": "Tool says hello from pyro_mcp.",
                    }
                }
            ]
        }

    async def fake_run_demo() -> dict[str, str]:
        return HELLO_STATIC_PAYLOAD

    monkeypatch.setattr(ollama_demo, "_post_chat_completion", fake_post_chat_completion)
    monkeypatch.setattr(ollama_demo, "run_demo", fake_run_demo)

    result = ollama_demo.run_ollama_tool_demo()

    assert result["tool_payload"] == HELLO_STATIC_PAYLOAD
    assert result["final_response"] == "Tool says hello from pyro_mcp."
    assert len(requests) == 2
    assert requests[0]["tools"][0]["function"]["name"] == "hello_static"
    tool_message = requests[1]["messages"][-1]
    assert tool_message["role"] == "tool"
    assert tool_message["tool_call_id"] == "call_1"


def test_run_ollama_tool_demo_raises_when_model_does_not_call_tool(
    monkeypatch: pytest.MonkeyPatch,
) -> None:
    def fake_post_chat_completion(base_url: str, payload: dict[str, Any]) -> dict[str, Any]:
        del base_url, payload
        return {"choices": [{"message": {"role": "assistant", "content": "No tool call."}}]}

    monkeypatch.setattr(ollama_demo, "_post_chat_completion", fake_post_chat_completion)

    with pytest.raises(RuntimeError, match="model did not trigger any tool call"):
        ollama_demo.run_ollama_tool_demo()


def test_run_ollama_tool_demo_raises_on_unexpected_tool(monkeypatch: pytest.MonkeyPatch) -> None:
    def fake_post_chat_completion(base_url: str, payload: dict[str, Any]) -> dict[str, Any]:
        del base_url, payload
        return {
            "choices": [
                {
                    "message": {
                        "role": "assistant",
                        "content": "",
                        "tool_calls": [
                            {
                                "id": "call_1",
                                "type": "function",
                                "function": {"name": "unexpected_tool", "arguments": "{}"},
                            }
                        ],
                    }
                }
            ]
        }

    monkeypatch.setattr(ollama_demo, "_post_chat_completion", fake_post_chat_completion)

    with pytest.raises(RuntimeError, match="unexpected tool requested by model"):
        ollama_demo.run_ollama_tool_demo()


def test_post_chat_completion_success(monkeypatch: pytest.MonkeyPatch) -> None:
    class StubResponse:
        def __enter__(self) -> StubResponse:
            return self

        def __exit__(self, exc_type: object, exc: object, tb: object) -> None:
            del exc_type, exc, tb

        def read(self) -> bytes:
            return b'{"ok": true}'

    def fake_urlopen(request: Any, timeout: int) -> StubResponse:
        assert timeout == 60
        assert request.full_url == "http://localhost:11434/v1/chat/completions"
        return StubResponse()

    monkeypatch.setattr(urllib.request, "urlopen", fake_urlopen)

    result = ollama_demo._post_chat_completion("http://localhost:11434/v1", {"x": 1})
    assert result == {"ok": True}


def test_post_chat_completion_raises_for_ollama_connection_error(
    monkeypatch: pytest.MonkeyPatch,
) -> None:
    def fake_urlopen(request: Any, timeout: int) -> Any:
        del request, timeout
        raise urllib.error.URLError("boom")

    monkeypatch.setattr(urllib.request, "urlopen", fake_urlopen)

    with pytest.raises(RuntimeError, match="failed to call Ollama"):
        ollama_demo._post_chat_completion("http://localhost:11434/v1", {"x": 1})


def test_post_chat_completion_raises_for_non_object_response(
    monkeypatch: pytest.MonkeyPatch,
) -> None:
    class StubResponse:
        def __enter__(self) -> StubResponse:
            return self

        def __exit__(self, exc_type: object, exc: object, tb: object) -> None:
            del exc_type, exc, tb

        def read(self) -> bytes:
            return b'["not-an-object"]'

    def fake_urlopen(request: Any, timeout: int) -> StubResponse:
        del request, timeout
        return StubResponse()

    monkeypatch.setattr(urllib.request, "urlopen", fake_urlopen)

    with pytest.raises(TypeError, match="unexpected Ollama response shape"):
        ollama_demo._post_chat_completion("http://localhost:11434/v1", {"x": 1})


@pytest.mark.parametrize(
    ("response", "expected_error"),
    [
        ({}, "did not contain completion choices"),
        ({"choices": [1]}, "unexpected completion choice format"),
        ({"choices": [{"message": "bad"}]}, "did not contain a message"),
    ],
)
def test_extract_message_validation_errors(
    response: dict[str, Any],
    expected_error: str,
) -> None:
    with pytest.raises(RuntimeError, match=expected_error):
        ollama_demo._extract_message(response)


def test_parse_tool_arguments_variants() -> None:
    assert ollama_demo._parse_tool_arguments(None) == {}
    assert ollama_demo._parse_tool_arguments({}) == {}
    assert ollama_demo._parse_tool_arguments("") == {}
    assert ollama_demo._parse_tool_arguments('{"a": 1}') == {"a": 1}


def test_parse_tool_arguments_rejects_invalid_types() -> None:
    with pytest.raises(TypeError, match="must decode to an object"):
        ollama_demo._parse_tool_arguments("[]")
    with pytest.raises(TypeError, match="must be a dictionary or JSON object string"):
        ollama_demo._parse_tool_arguments(123)


@pytest.mark.parametrize(
    ("tool_call", "expected_error"),
    [
        (1, "invalid tool call entry"),
        ({"id": "c1"}, "did not include function metadata"),
        (
            {"id": "c1", "function": {"name": "hello_static", "arguments": '{"x": 1}'}},
            "does not accept arguments",
        ),
        (
            {"id": "", "function": {"name": "hello_static", "arguments": "{}"}},
            "did not provide a valid call id",
        ),
    ],
)
def test_run_ollama_tool_demo_validation_branches(
    monkeypatch: pytest.MonkeyPatch,
    tool_call: Any,
    expected_error: str,
) -> None:
    def fake_post_chat_completion(base_url: str, payload: dict[str, Any]) -> dict[str, Any]:
        del base_url, payload
        return {
            "choices": [
                {
                    "message": {
                        "role": "assistant",
                        "content": "",
                        "tool_calls": [tool_call],
                    }
                }
            ]
        }

    monkeypatch.setattr(ollama_demo, "_post_chat_completion", fake_post_chat_completion)

    with pytest.raises(RuntimeError, match=expected_error):
        ollama_demo.run_ollama_tool_demo()


def test_main_uses_parser_and_prints_json(
    monkeypatch: pytest.MonkeyPatch,
    capsys: pytest.CaptureFixture[str],
) -> None:
    class StubParser:
        def parse_args(self) -> argparse.Namespace:
            return argparse.Namespace(base_url="http://x", model="m")

    monkeypatch.setattr(ollama_demo, "_build_parser", lambda: StubParser())
    monkeypatch.setattr(
        ollama_demo,
        "run_ollama_tool_demo",
        lambda base_url, model: {"base_url": base_url, "model": model},
    )

    ollama_demo.main()

    output = json.loads(capsys.readouterr().out)
    assert output == {"base_url": "http://x", "model": "m"}
tests/test_server.py (new file)
@@ -0,0 +1,46 @@
from __future__ import annotations

import asyncio
from typing import Any

import pytest
from mcp.types import TextContent

import pyro_mcp.server as server_module
from pyro_mcp.server import HELLO_STATIC_PAYLOAD, create_server


def test_create_server_registers_static_tool() -> None:
    async def _run() -> list[str]:
        server = create_server()
        tools = await server.list_tools()
        return [tool.name for tool in tools]

    tool_names = asyncio.run(_run())
    assert "hello_static" in tool_names


def test_hello_static_returns_expected_payload() -> None:
    async def _run() -> tuple[list[TextContent], dict[str, Any]]:
        server = create_server()
        blocks, structured = await server.call_tool("hello_static", {})
        assert isinstance(blocks, list)
        assert all(isinstance(block, TextContent) for block in blocks)
        assert isinstance(structured, dict)
        return blocks, structured

    _, structured_output = asyncio.run(_run())
    assert structured_output == HELLO_STATIC_PAYLOAD


def test_server_main_runs_stdio_transport(monkeypatch: pytest.MonkeyPatch) -> None:
    called: dict[str, str] = {}

    class StubServer:
        def run(self, transport: str) -> None:
            called["transport"] = transport

    monkeypatch.setattr(server_module, "create_server", lambda: StubServer())
    server_module.main()

    assert called == {"transport": "stdio"}