Developer and maintainer workflows
This document keeps build, packaging, development, and benchmarking material out of the first-run README path.
Build and packaging
make build
make package
make package-portable
make package-deb
make package-arch
make runtime-check
make release-check
make release-prep
bash ./scripts/ci_portable_smoke.sh
- `make package-portable` builds `dist/aman-x11-linux-<version>.tar.gz` plus its `.sha256` file.
- `bash ./scripts/ci_portable_smoke.sh` reproduces the Ubuntu CI portable install plus `aman doctor` smoke path locally.
- `make release-prep` runs `make release-check`, builds the packaged artifacts, and writes `dist/SHA256SUMS` for the release page upload set.
- `make package-deb` installs Python dependencies while creating the package.
- For offline Debian packaging, set `AMAN_WHEELHOUSE_DIR` to a directory containing the required wheels.
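As a self-contained sketch of how a checksum file like `dist/SHA256SUMS` is verified (throwaway paths and a fake artifact, not the real release files):

```shell
# Build a throwaway artifact, record its checksum, then verify it,
# mirroring what a release consumer does with dist/SHA256SUMS.
tmpdir="$(mktemp -d)"
printf 'example artifact' > "$tmpdir/aman-x11-linux-example.tar.gz"
(cd "$tmpdir" && sha256sum aman-x11-linux-example.tar.gz > SHA256SUMS)
(cd "$tmpdir" && sha256sum -c SHA256SUMS)
rm -rf "$tmpdir"
```

`sha256sum -c` exits non-zero if an artifact was modified, so the same check doubles as a CI or download-verification gate.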
For 1.0.0, the manual publication target is the forge release page at
https://git.thaloco.com/thaloco/aman/releases, using
docs/releases/1.0.0.md as the release-notes source.
Developer setup
uv workflow:
uv sync
uv run aman run --config ~/.config/aman/config.json
pip workflow:
make install-local
aman run --config ~/.config/aman/config.json
Support and control commands
make run
make run config.example.json
make doctor
make self-check
make runtime-check
make eval-models
make sync-default-model
make check-default-model
make check
CLI examples:
aman doctor --config ~/.config/aman/config.json --json
aman self-check --config ~/.config/aman/config.json --json
aman run --config ~/.config/aman/config.json
aman bench --text "example transcript" --repeat 5 --warmup 1
aman build-heuristic-dataset --input benchmarks/heuristics_dataset.raw.jsonl --output benchmarks/heuristics_dataset.jsonl --json
aman eval-models --dataset benchmarks/cleanup_dataset.jsonl --matrix benchmarks/model_matrix.small_first.json --heuristic-dataset benchmarks/heuristics_dataset.jsonl --heuristic-weight 0.25 --json
aman version
aman init --config ~/.config/aman/config.json --force
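The `--json` flags make `doctor` and `self-check` scriptable. A hedged sketch of gating a script on that output (the JSON shape and `ok` key shown here are assumptions, not the documented schema; the stand-in variable replaces a real `aman doctor ... --json` call):

```shell
# Stand-in for: aman doctor --config ~/.config/aman/config.json --json
doctor_json='{"ok": true}'
echo "$doctor_json" \
  | python3 -c 'import json,sys; sys.exit(0 if json.load(sys.stdin).get("ok") else 1)' \
  && echo "doctor passed"
```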
Benchmarking
aman bench --text "draft a short email to Marta confirming lunch" --repeat 10 --warmup 2
aman bench --text-file ./bench-input.txt --repeat 20 --json
`bench` does not capture audio and never injects text into desktop apps. It runs
the processing path from input transcript text through
alignment/editor/fact-guard/vocabulary cleanup and prints timing summaries.
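The timing summaries are plain aggregates over the repeat runs. A self-contained sketch of that kind of aggregate, using made-up latencies rather than real bench output:

```shell
# Three fabricated per-repeat latencies in milliseconds; the mean is
# the same kind of summary bench reports across --repeat runs.
printf '12.1\n11.8\n12.4\n' \
  | awk '{s += $1; n++} END {printf "mean_ms=%.2f\n", s / n}'
# prints mean_ms=12.10
```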
Model evaluation
aman build-heuristic-dataset --input benchmarks/heuristics_dataset.raw.jsonl --output benchmarks/heuristics_dataset.jsonl
aman eval-models --dataset benchmarks/cleanup_dataset.jsonl --matrix benchmarks/model_matrix.small_first.json --heuristic-dataset benchmarks/heuristics_dataset.jsonl --heuristic-weight 0.25 --output benchmarks/results/latest.json
make sync-default-model
- `eval-models` runs a structured model/parameter sweep over a JSONL dataset and outputs latency plus quality metrics.
- When `--heuristic-dataset` is provided, the report also includes alignment-heuristic quality metrics.
- `make sync-default-model` promotes the report winner to the managed default-model constants, and `make check-default-model` keeps that drift check in CI.
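Both datasets are JSONL: one JSON object per line. A minimal sketch of building and sanity-checking such a file (the `input`/`expected` field names are illustrative assumptions, not the documented schema):

```shell
# Write a two-record JSONL file and confirm every line parses as JSON.
dataset="$(mktemp)"
cat > "$dataset" <<'EOF'
{"input": "draft a short email to marta", "expected": "Draft a short email to Marta."}
{"input": "confirm lunch at noon", "expected": "Confirm lunch at noon."}
EOF
python3 -c 'import json,sys; print(sum(1 for line in open(sys.argv[1]) if json.loads(line)))' "$dataset"
rm -f "$dataset"
```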
Internal maintainer CLI:
aman-maint sync-default-model --check --report benchmarks/results/latest.json --artifacts benchmarks/model_artifacts.json --constants src/constants.py
Dataset and artifact details live in benchmarks/README.md.