# Developer and Maintainer Workflows
This document keeps build, packaging, development, and benchmarking material
out of the first-run README path.
## Build and packaging
```bash
make build
make package
make package-portable
make package-deb
make package-arch
make runtime-check
make release-check
make release-prep
```
- `make package-portable` builds `dist/aman-x11-linux-<version>.tar.gz` plus
its `.sha256` file.
- `make release-prep` runs `make release-check`, builds the packaged artifacts,
  and writes `dist/SHA256SUMS` for the release page upload set (a verification
  sketch follows at the end of this section).
- `make package-deb` installs Python dependencies while creating the package.
- For offline Debian packaging, set `AMAN_WHEELHOUSE_DIR` to a directory
  containing the required wheels, as sketched below.
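
For example, an offline build might look like this (the wheelhouse path is
illustrative, not a project convention):
```bash
# Build the .deb without network access; /srv/aman-wheels is a placeholder
# directory that must already contain every required wheel.
AMAN_WHEELHOUSE_DIR=/srv/aman-wheels make package-deb
```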

For `1.0.0`, the manual publication target is the forge release page at
`https://git.thaloco.com/thaloco/aman/releases`, using
[`docs/releases/1.0.0.md`](./releases/1.0.0.md) as the release-notes source.
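
A minimal pre-upload verification pass, assuming `make release-prep` has
already populated `dist/` and that the checksum files use the standard
`sha256sum` format (the artifact name below is illustrative):
```bash
# Verify the candidate artifact set before publishing the release page.
cd dist
sha256sum -c SHA256SUMS
# The portable tarball also ships its own checksum file.
sha256sum -c aman-x11-linux-1.0.0.tar.gz.sha256
```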
## Developer setup
`uv` workflow:
```bash
uv sync --extra x11
uv run aman run --config ~/.config/aman/config.json
```
`pip` workflow:
```bash
make install-local
aman run --config ~/.config/aman/config.json
```
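A plausible first-run sequence under the `uv` workflow, combining the setup
above with the `init` and `doctor` commands shown later in this document
(the ordering is a sketch, not a documented requirement):
```bash
# Install extras, write a fresh config, sanity-check, then run.
uv sync --extra x11
uv run aman init --config ~/.config/aman/config.json
uv run aman doctor --config ~/.config/aman/config.json
uv run aman run --config ~/.config/aman/config.json
```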
## Support and control commands
```bash
make run
make run config.example.json
make doctor
make self-check
make runtime-check
make eval-models
make sync-default-model
make check-default-model
make check
```
CLI examples:
```bash
aman doctor --config ~/.config/aman/config.json --json
aman self-check --config ~/.config/aman/config.json --json
aman run --config ~/.config/aman/config.json
aman bench --text "example transcript" --repeat 5 --warmup 1
aman build-heuristic-dataset --input benchmarks/heuristics_dataset.raw.jsonl --output benchmarks/heuristics_dataset.jsonl --json
aman eval-models --dataset benchmarks/cleanup_dataset.jsonl --matrix benchmarks/model_matrix.small_first.json --heuristic-dataset benchmarks/heuristics_dataset.jsonl --heuristic-weight 0.25 --json
aman sync-default-model --check --report benchmarks/results/latest.json --artifacts benchmarks/model_artifacts.json --constants src/constants.py
aman version
aman init --config ~/.config/aman/config.json --force
```
## Benchmarking
```bash
aman bench --text "draft a short email to Marta confirming lunch" --repeat 10 --warmup 2
aman bench --text-file ./bench-input.txt --repeat 20 --json
```
`bench` does not capture audio and never injects text into desktop apps. It
runs the processing path from input transcript text through
alignment/editor/fact-guard/vocabulary cleanup and prints timing summaries.
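
Because `bench` is side-effect free, it is safe to script. A sketch of a
repeatable sweep over a corpus file (`bench-corpus.txt` and the output layout
are hypothetical):
```bash
# Benchmark each line of a transcript corpus and keep the JSON summaries
# so runs can be compared across revisions.
mkdir -p bench-results
i=0
while IFS= read -r line; do
  i=$((i + 1))
  aman bench --text "$line" --repeat 10 --warmup 2 --json \
    > "bench-results/case-$i.json"
done < bench-corpus.txt
```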
## Model evaluation
```bash
aman build-heuristic-dataset --input benchmarks/heuristics_dataset.raw.jsonl --output benchmarks/heuristics_dataset.jsonl
aman eval-models --dataset benchmarks/cleanup_dataset.jsonl --matrix benchmarks/model_matrix.small_first.json --heuristic-dataset benchmarks/heuristics_dataset.jsonl --heuristic-weight 0.25 --output benchmarks/results/latest.json
aman sync-default-model --report benchmarks/results/latest.json --artifacts benchmarks/model_artifacts.json --constants src/constants.py
```
- `eval-models` runs a structured model/parameter sweep over a JSONL dataset
and outputs latency plus quality metrics.
- When `--heuristic-dataset` is provided, the report also includes
alignment-heuristic quality metrics.
- `sync-default-model` promotes the report winner to the managed default model
constants and can be run in `--check` mode for CI and release gates.
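
For example, a CI gate built on `--check` mode might look like this (assuming
the conventional non-zero exit status when the constants are stale):
```bash
# Fail the pipeline if src/constants.py no longer matches the report winner.
aman sync-default-model --check \
  --report benchmarks/results/latest.json \
  --artifacts benchmarks/model_artifacts.json \
  --constants src/constants.py \
  || { echo "default model constants are stale; run make sync-default-model" >&2; exit 1; }
```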

Dataset and artifact details live in [`benchmarks/README.md`](../benchmarks/README.md).