# Developer And Maintainer Workflows

This document keeps build, packaging, development, and benchmarking material out of the first-run README path.

## Build and packaging

```bash
make build
make package
make package-portable
make package-deb
make package-arch
make runtime-check
make release-check
make release-prep
bash ./scripts/ci_portable_smoke.sh
```

- `make package-portable` builds `dist/aman-x11-linux-.tar.gz` plus its `.sha256` file.
- `bash ./scripts/ci_portable_smoke.sh` reproduces the Ubuntu CI portable-install and `aman doctor` smoke path locally.
- `make release-prep` runs `make release-check`, builds the packaged artifacts, and writes `dist/SHA256SUMS` for the release-page upload set.
- `make package-deb` installs Python dependencies while creating the package.
- For offline Debian packaging, set `AMAN_WHEELHOUSE_DIR` to a directory containing the required wheels.

For `1.0.0`, the manual publication target is the forge release page at `https://git.thaloco.com/thaloco/aman/releases`, using [`docs/releases/1.0.0.md`](./releases/1.0.0.md) as the release-notes source.

## Developer setup

`uv` workflow:

```bash
python3 -m venv --system-site-packages .venv
. .venv/bin/activate
uv sync --active
uv run aman run --config ~/.config/aman/config.json
```

Install the documented distro runtime dependencies first so the active virtualenv can see GTK/AppIndicator/X11 bindings from the system Python.
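The `--system-site-packages` flag in the workflow above is what makes those system bindings importable from inside the venv. As a minimal sketch (using a throwaway venv path, not the project's `.venv`), the flag is recorded in the venv's `pyvenv.cfg`, which is a quick way to confirm a venv was created correctly:

```shell
# create a throwaway venv that can see system site-packages
python3 -m venv --system-site-packages /tmp/aman-demo-venv

# the flag is persisted in the venv's pyvenv.cfg;
# a venv created without it would show "false" here instead
grep include-system-site-packages /tmp/aman-demo-venv/pyvenv.cfg
# → include-system-site-packages = true
```

If GTK imports still fail inside the venv, check this setting before reinstalling anything.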
`pip` workflow:

```bash
make install-local
aman run --config ~/.config/aman/config.json
```

## Support and control commands

```bash
make run
make run config.example.json
make doctor
make self-check
make runtime-check
make eval-models
make sync-default-model
make check-default-model
make check
```

CLI examples:

```bash
aman doctor --config ~/.config/aman/config.json --json
aman self-check --config ~/.config/aman/config.json --json
aman run --config ~/.config/aman/config.json
aman bench --text "example transcript" --repeat 5 --warmup 1
aman build-heuristic-dataset --input benchmarks/heuristics_dataset.raw.jsonl --output benchmarks/heuristics_dataset.jsonl --json
aman eval-models --dataset benchmarks/cleanup_dataset.jsonl --matrix benchmarks/model_matrix.small_first.json --heuristic-dataset benchmarks/heuristics_dataset.jsonl --heuristic-weight 0.25 --json
aman version
aman init --config ~/.config/aman/config.json --force
```

## Benchmarking

```bash
aman bench --text "draft a short email to Marta confirming lunch" --repeat 10 --warmup 2
aman bench --text-file ./bench-input.txt --repeat 20 --json
```

`bench` does not capture audio and never injects text into desktop apps. It runs the processing path from input transcript text through alignment/editor/fact-guard/vocabulary cleanup and prints timing summaries.

## Model evaluation

```bash
aman build-heuristic-dataset --input benchmarks/heuristics_dataset.raw.jsonl --output benchmarks/heuristics_dataset.jsonl
aman eval-models --dataset benchmarks/cleanup_dataset.jsonl --matrix benchmarks/model_matrix.small_first.json --heuristic-dataset benchmarks/heuristics_dataset.jsonl --heuristic-weight 0.25 --output benchmarks/results/latest.json
make sync-default-model
```

- `eval-models` runs a structured model/parameter sweep over a JSONL dataset and outputs latency plus quality metrics.
- When `--heuristic-dataset` is provided, the report also includes alignment-heuristic quality metrics.
- `make sync-default-model` promotes the report winner to the managed default-model constants, and `make check-default-model` keeps that drift check in CI.

Internal maintainer CLI:

```bash
aman-maint sync-default-model --check --report benchmarks/results/latest.json --artifacts benchmarks/model_artifacts.json --constants src/constants.py
```

Dataset and artifact details live in [`benchmarks/README.md`](../benchmarks/README.md).
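## Appendix: verifying release checksums

`make release-prep` (see "Build and packaging" above) writes `dist/SHA256SUMS` for the release-page upload set. That manifest can be checked with standard coreutils before uploading or after downloading; a minimal sketch using placeholder artifact names rather than the real `dist/` contents:

```shell
# stand-in for dist/: one fake artifact plus a SHA256SUMS manifest
mkdir -p /tmp/aman-dist-demo
cd /tmp/aman-dist-demo
echo "release artifact" > demo.tar.gz
sha256sum demo.tar.gz > SHA256SUMS

# verification: prints "demo.tar.gz: OK" when the checksum matches,
# and exits nonzero if any listed file is missing or altered
sha256sum -c SHA256SUMS
```

The same `sha256sum -c SHA256SUMS` invocation works unchanged inside a real `dist/` directory.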