Developer and Maintainer Workflows

This document keeps build, packaging, development, and benchmarking material out of the first-run README path.

Build and packaging

make build
make package
make package-portable
make package-deb
make package-arch
make runtime-check
make release-check
make release-prep
  • make package-portable builds dist/aman-x11-linux-<version>.tar.gz plus its .sha256 file.
  • make release-prep runs make release-check, builds the packaged artifacts, and writes dist/SHA256SUMS for the release page upload set.
  • make package-deb installs Python dependencies while creating the package.
  • For offline Debian packaging, set AMAN_WHEELHOUSE_DIR to a directory containing the required wheels.
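The `.sha256` and `dist/SHA256SUMS` files follow the standard `sha256sum` line format (`<hex digest>`, two spaces, filename). A minimal verification sketch in Python; the artifact name below is a throwaway placeholder, not a real release file:

```python
import hashlib
import tempfile
from pathlib import Path

def verify_sha256_line(line: str, base_dir: Path) -> bool:
    """Check one sha256sum-style line: '<hex digest>  <filename>'."""
    digest, _, name = line.strip().partition("  ")
    data = (base_dir / name).read_bytes()
    return hashlib.sha256(data).hexdigest() == digest

# Example with a throwaway file standing in for a release tarball.
with tempfile.TemporaryDirectory() as d:
    base = Path(d)
    artifact = base / "aman-placeholder.tar.gz"  # placeholder name
    artifact.write_bytes(b"not a real tarball")
    line = f"{hashlib.sha256(b'not a real tarball').hexdigest()}  {artifact.name}"
    print(verify_sha256_line(line, base))  # True
```

The same check is what `sha256sum -c` performs; the helper is only useful where coreutils is unavailable.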

For 1.0.0, the manual publication target is the forge release page at https://git.thaloco.com/thaloco/aman/releases, using docs/releases/1.0.0.md as the release-notes source.

Developer setup

uv workflow:

uv sync
uv run aman run --config ~/.config/aman/config.json

pip workflow:

make install-local
aman run --config ~/.config/aman/config.json

Support and control commands

make run
make run config.example.json
make doctor
make self-check
make runtime-check
make eval-models
make sync-default-model
make check-default-model
make check

CLI examples:

aman doctor --config ~/.config/aman/config.json --json
aman self-check --config ~/.config/aman/config.json --json
aman run --config ~/.config/aman/config.json
aman bench --text "example transcript" --repeat 5 --warmup 1
aman build-heuristic-dataset --input benchmarks/heuristics_dataset.raw.jsonl --output benchmarks/heuristics_dataset.jsonl --json
aman eval-models --dataset benchmarks/cleanup_dataset.jsonl --matrix benchmarks/model_matrix.small_first.json --heuristic-dataset benchmarks/heuristics_dataset.jsonl --heuristic-weight 0.25 --json
aman sync-default-model --check --report benchmarks/results/latest.json --artifacts benchmarks/model_artifacts.json --constants src/constants.py
aman version
aman init --config ~/.config/aman/config.json --force
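Passing --json makes doctor and self-check emit machine-readable reports suitable for scripting. The exact schema is not documented here; as a sketch, assuming a hypothetical report shape with a list of named checks and per-check ok flags, a CI gate could look like:

```python
import json

def failed_checks(report_text: str) -> list[str]:
    """Return the names of failed checks from a doctor-style JSON report.
    The {"checks": [{"name": ..., "ok": ...}]} shape is an assumption,
    not Aman's documented schema."""
    report = json.loads(report_text)
    return [c["name"] for c in report.get("checks", []) if not c.get("ok", False)]

sample = '{"checks": [{"name": "audio", "ok": true}, {"name": "x11", "ok": false}]}'
print(failed_checks(sample))  # ['x11']
```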

Benchmarking

aman bench --text "draft a short email to Marta confirming lunch" --repeat 10 --warmup 2
aman bench --text-file ./bench-input.txt --repeat 20 --json

bench does not capture audio and never injects text into desktop apps. It runs the processing path from input transcript text through alignment, editor, fact-guard, and vocabulary cleanup, then prints timing summaries.
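The warmup/repeat pattern behind --repeat and --warmup can be illustrated with a small standalone timer; the workload and the summary fields below are illustrative, not Aman's implementation:

```python
import statistics
import time

def bench(fn, *, repeat: int = 10, warmup: int = 2) -> dict:
    """Time fn over `repeat` runs after `warmup` untimed runs."""
    for _ in range(warmup):  # warmup runs are executed but not timed
        fn()
    samples = []
    for _ in range(repeat):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return {
        "runs": repeat,
        "mean_s": statistics.mean(samples),
        "max_s": max(samples),
    }

summary = bench(lambda: sum(range(10_000)), repeat=5, warmup=1)
print(summary["runs"])  # 5
```

Discarding warmup runs keeps one-time costs (imports, caches, JIT-style effects) out of the reported timings.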

Model evaluation

aman build-heuristic-dataset --input benchmarks/heuristics_dataset.raw.jsonl --output benchmarks/heuristics_dataset.jsonl
aman eval-models --dataset benchmarks/cleanup_dataset.jsonl --matrix benchmarks/model_matrix.small_first.json --heuristic-dataset benchmarks/heuristics_dataset.jsonl --heuristic-weight 0.25 --output benchmarks/results/latest.json
aman sync-default-model --report benchmarks/results/latest.json --artifacts benchmarks/model_artifacts.json --constants src/constants.py
  • eval-models runs a structured model/parameter sweep over a JSONL dataset and outputs latency plus quality metrics.
  • When --heuristic-dataset is provided, the report also includes alignment-heuristic quality metrics.
  • sync-default-model promotes the report winner to the managed default model constants and can be run in --check mode for CI and release gates.
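The promotion step can be pictured as selecting the best combined score from the report. A sketch assuming a hypothetical report shape where each candidate carries a cleanup-quality score and an alignment-heuristic score; the field names and the blending formula are assumptions, not the real report schema:

```python
def pick_winner(candidates: list[dict], heuristic_weight: float = 0.25) -> str:
    """Rank candidates by a weighted blend of cleanup quality and
    alignment-heuristic quality; field names are illustrative only."""
    def score(c: dict) -> float:
        return ((1 - heuristic_weight) * c["cleanup_quality"]
                + heuristic_weight * c["heuristic_quality"])
    return max(candidates, key=score)["model"]

report = [
    {"model": "model-a", "cleanup_quality": 0.90, "heuristic_quality": 0.60},
    {"model": "model-b", "cleanup_quality": 0.85, "heuristic_quality": 0.95},
]
print(pick_winner(report))  # blended scores: a=0.825, b=0.875 -> "model-b"
```

With --heuristic-weight 0.25 as in the commands above, a quarter of the ranking weight would come from the heuristic metrics under this assumed formula.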

Dataset and artifact details live in benchmarks/README.md.