aman/docs/developer-workflows.md
Thales Maciel 4d0081d1d0 Split aman.py into focused CLI and runtime modules
2026-03-14 14:54:57 -03:00


# Developer And Maintainer Workflows
This document keeps build, packaging, development, and benchmarking material
out of the first-run README path.
## Build and packaging
```bash
make build
make package
make package-portable
make package-deb
make package-arch
make runtime-check
make release-check
make release-prep
```
- `make package-portable` builds `dist/aman-x11-linux-<version>.tar.gz` along
  with its `.sha256` checksum file.
- `make release-prep` runs `make release-check`, builds the packaged artifacts,
and writes `dist/SHA256SUMS` for the release page upload set.
- `make package-deb` installs Python dependencies while creating the package.
- For offline Debian packaging, set `AMAN_WHEELHOUSE_DIR` to a directory
containing the required wheels.
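The `.sha256` companions and `dist/SHA256SUMS` can be verified with `sha256sum -c`; as a minimal Python equivalent, assuming the conventional `<hex-digest>  <filename>` line format (a sketch, not part of the aman tooling):

```python
import hashlib
from pathlib import Path

def verify_sha256(sums_file: Path) -> bool:
    """Check every '<hex-digest>  <filename>' line in a sha256sum-style file."""
    for line in sums_file.read_text().splitlines():
        if not line.strip():
            continue
        expected, name = line.split(maxsplit=1)
        # sha256sum prefixes '*' for binary mode; strip it along with whitespace.
        target = sums_file.parent / name.strip().lstrip("*")
        if hashlib.sha256(target.read_bytes()).hexdigest() != expected:
            return False
    return True
```

Running this against `dist/SHA256SUMS` after `make release-prep` confirms the upload set is intact before publishing.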
For `1.0.0`, the manual publication target is the forge release page at
`https://git.thaloco.com/thaloco/aman/releases`, using
[`docs/releases/1.0.0.md`](./releases/1.0.0.md) as the release-notes source.
## Developer setup
`uv` workflow:
```bash
uv sync
uv run aman run --config ~/.config/aman/config.json
```
`pip` workflow:
```bash
make install-local
aman run --config ~/.config/aman/config.json
```
## Support and control commands
```bash
make run
make run config.example.json
make doctor
make self-check
make runtime-check
make eval-models
make sync-default-model
make check-default-model
make check
```
CLI examples:
```bash
aman doctor --config ~/.config/aman/config.json --json
aman self-check --config ~/.config/aman/config.json --json
aman run --config ~/.config/aman/config.json
aman bench --text "example transcript" --repeat 5 --warmup 1
aman build-heuristic-dataset --input benchmarks/heuristics_dataset.raw.jsonl --output benchmarks/heuristics_dataset.jsonl --json
aman eval-models --dataset benchmarks/cleanup_dataset.jsonl --matrix benchmarks/model_matrix.small_first.json --heuristic-dataset benchmarks/heuristics_dataset.jsonl --heuristic-weight 0.25 --json
aman version
aman init --config ~/.config/aman/config.json --force
```
## Benchmarking
```bash
aman bench --text "draft a short email to Marta confirming lunch" --repeat 10 --warmup 2
aman bench --text-file ./bench-input.txt --repeat 20 --json
```
`bench` does not capture audio and never injects text into desktop apps. It runs
the processing path from input transcript text through
alignment/editor/fact-guard/vocabulary cleanup and prints timing summaries.
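The `--warmup`/`--repeat` split can be pictured as a small timing harness; this is a hypothetical sketch, not aman's actual implementation, with `process` standing in for the cleanup pipeline:

```python
import statistics
import time

def bench(process, text: str, repeat: int, warmup: int) -> dict:
    """Time process(text) over `repeat` measured runs after `warmup` unmeasured ones."""
    for _ in range(warmup):
        process(text)  # unmeasured: primes caches, model load, etc.
    samples = []
    for _ in range(repeat):
        start = time.perf_counter()
        process(text)
        samples.append(time.perf_counter() - start)
    return {
        "runs": repeat,
        "mean_ms": statistics.mean(samples) * 1000,
        "p95_ms": sorted(samples)[max(0, int(repeat * 0.95) - 1)] * 1000,
    }
```

Warmup runs are excluded from the summary so one-time startup cost does not skew the latency numbers.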
## Model evaluation
```bash
aman build-heuristic-dataset --input benchmarks/heuristics_dataset.raw.jsonl --output benchmarks/heuristics_dataset.jsonl
aman eval-models --dataset benchmarks/cleanup_dataset.jsonl --matrix benchmarks/model_matrix.small_first.json --heuristic-dataset benchmarks/heuristics_dataset.jsonl --heuristic-weight 0.25 --output benchmarks/results/latest.json
make sync-default-model
```
- `eval-models` runs a structured model/parameter sweep over a JSONL dataset
and outputs latency plus quality metrics.
- When `--heuristic-dataset` is provided, the report also includes
alignment-heuristic quality metrics.
- `make sync-default-model` promotes the report winner to the managed
  default-model constants, and `make check-default-model` runs the same
  comparison as a drift check in CI.
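A drift check of this shape boils down to comparing the report winner against the constant committed in source. The sketch below is illustrative only: the report's `winner.model` field and the `DEFAULT_MODEL` constant name are assumptions, not aman's actual schema:

```python
import json
import re
from pathlib import Path

def default_model_in_sync(report_path: Path, constants_path: Path) -> bool:
    """True when the benchmark report winner matches the committed default-model constant."""
    winner = json.loads(report_path.read_text())["winner"]["model"]
    match = re.search(r'DEFAULT_MODEL\s*=\s*"([^"]+)"', constants_path.read_text())
    return match is not None and match.group(1) == winner
```

In CI, a `False` result fails the build and prompts a `make sync-default-model` run to re-promote the winner.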
Internal maintainer CLI:
```bash
aman-maint sync-default-model --check --report benchmarks/results/latest.json --artifacts benchmarks/model_artifacts.json --constants src/constants.py
```
Dataset and artifact details live in [`benchmarks/README.md`](../benchmarks/README.md).