Add benchmark-driven model promotion workflow and pipeline stages
Some checks failed: ci / test-and-build (push) has been cancelled
parent 98b13d1069
commit 8c1f7c1e13
38 changed files with 5300 additions and 503 deletions
README.md: 95 changed lines

@@ -102,7 +102,8 @@ It includes sections for:
- hotkey
- output backend
- writing profile
- runtime and model strategy
- output safety policy
- runtime strategy (managed vs custom Whisper path)
- help/about actions

## Config

@@ -120,25 +121,18 @@ Create `~/.config/aman/config.json` (or let `aman` create it automatically on fi
"device": "cpu",
"language": "auto"
},
"llm": { "provider": "local_llama" },
"models": {
"allow_custom_models": false,
"whisper_model_path": "",
"llm_model_path": ""
},
"external_api": {
"enabled": false,
"provider": "openai",
"base_url": "https://api.openai.com/v1",
"model": "gpt-4o-mini",
"timeout_ms": 15000,
"max_retries": 2,
"api_key_env_var": "AMAN_EXTERNAL_API_KEY"
"whisper_model_path": ""
},
"injection": {
"backend": "clipboard",
"remove_transcription_from_clipboard": false
},
"safety": {
"enabled": true,
"strict": false
},
"ux": {
"profile": "default",
"show_notifications": true

@@ -172,6 +166,9 @@ Profile options:
- `ux.profile=default`: baseline cleanup behavior.
- `ux.profile=fast`: lower-latency AI generation settings.
- `ux.profile=polished`: same cleanup depth as default.
- `safety.enabled=true`: enables fact-preservation checks (names/numbers/IDs/URLs).
- `safety.strict=false`: fall back to the safer draft when fact checks fail.
- `safety.strict=true`: reject output when fact checks fail.
- `advanced.strict_startup=true`: keep fail-fast startup validation behavior.
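
For reference, a minimal sketch of how these flags map onto the config file, assuming the nesting shown in the config example above (the `advanced` block is inferred from the `advanced.strict_startup` key path and may differ):

```json
{
  "ux": { "profile": "default" },
  "safety": { "enabled": true, "strict": false },
  "advanced": { "strict_startup": true }
}
```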

Transcription language:

@@ -185,8 +182,18 @@ Hotkey notes:
- Use one key plus optional modifiers (for example `Cmd+m`, `Super+m`, `Ctrl+space`).
- `Super` and `Cmd` are equivalent aliases for the same modifier.
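
Purely as an illustration of the syntax described above (the key name `hotkey` is an assumption; the config excerpt in this diff does not show where the binding is stored):

```json
{ "hotkey": "Cmd+m" }
```

`"Super+m"` would bind the same shortcut, since `Super` and `Cmd` are aliases.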

AI cleanup is always enabled and uses the locked local Llama-3.2-3B GGUF model
AI cleanup is always enabled and uses the locked local Qwen2.5-1.5B GGUF model
downloaded to `~/.cache/aman/models/` during daemon initialization.
Prompts are structured with semantic XML tags for both system and user messages
to improve instruction adherence and output consistency.
Cleanup runs in two local passes:
- pass 1 drafts cleaned text and labels ambiguity decisions (correction/literal/spelling/filler)
- pass 2 audits those decisions conservatively and emits final `cleaned_text`
This keeps Aman in dictation mode: it does not execute editing instructions embedded in transcript text.
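
To make the two-pass split concrete, here is a hypothetical sketch of what pass 1 hands to pass 2; the field names are invented for illustration and are not the actual internal schema:

```json
{
  "draft_text": "Send the invoice to Martin by Friday.",
  "decisions": [
    { "span": "to Marta, I mean to Martin", "label": "correction" },
    { "span": "um", "label": "filler" }
  ]
}
```

Pass 2 would then audit each labeled decision conservatively before emitting the final `cleaned_text`.
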
Before Aman reports `ready`, local llama runs a tiny warmup completion so the
first real transcription is faster.
If warmup fails and `advanced.strict_startup=true`, startup fails fast.
With `advanced.strict_startup=false`, Aman logs a warning and continues.
Model downloads use a network timeout and SHA256 verification before activation.
Cached models are checksum-verified on startup; mismatches trigger a forced
redownload.

@@ -195,10 +202,9 @@ Provider policy:

- `Aman-managed` mode (recommended) is the canonical supported UX:
  Aman handles model lifecycle and safe defaults for you.
- `Expert mode` is opt-in and exposes custom providers/models for advanced users.
- External API auth is environment-variable based (`external_api.api_key_env_var`);
  no API key is stored in config.
- Custom local model paths are only active with `models.allow_custom_models=true`.
- `Expert mode` is opt-in and exposes a custom Whisper model path for advanced users.
- Editor model/provider configuration is intentionally not exposed in config.
- Custom Whisper paths are only active with `models.allow_custom_models=true`.
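
A minimal sketch of an Expert-mode override, using the `models` keys from the config example above (the path value is only a placeholder):

```json
{
  "models": {
    "allow_custom_models": true,
    "whisper_model_path": "/path/to/custom-whisper-model"
  }
}
```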

Use `-v/--verbose` to enable DEBUG logs, including recognized/processed
transcript text and llama.cpp logs (`llama::` prefix). Without `-v`, logs are

@@ -213,8 +219,17 @@ Vocabulary correction:

STT hinting:

- Vocabulary is passed to Whisper as `hotwords`/`initial_prompt` only when those
  arguments are supported by the installed `faster-whisper` runtime.
- Vocabulary is passed to Whisper as compact `hotwords` only when that argument
  is supported by the installed `faster-whisper` runtime.
- Aman enables `word_timestamps` when supported and runs a conservative
  alignment heuristic pass (self-correction/restart detection) before the editor
  stage.

Fact guard:

- Aman runs a deterministic fact-preservation verifier after editor output.
- If facts are changed/invented and `safety.strict=false`, Aman falls back to the safer aligned draft.
- If facts are changed/invented and `safety.strict=true`, processing fails and output is not injected.

## systemd user service

@@ -249,10 +264,10 @@ Injection backends:
- `injection`: type the text with simulated keypresses (XTest)
- `injection.remove_transcription_from_clipboard`: when `true` and backend is `clipboard`, restores/clears the clipboard after paste so the transcript is not kept there
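
For example, a clipboard-backend setup that does not keep the transcript on the clipboard after pasting, using the same keys as the config example above:

```json
{
  "injection": {
    "backend": "clipboard",
    "remove_transcription_from_clipboard": true
  }
}
```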

AI processing:
Editor stage:

- Default local llama.cpp model.
- Optional external API provider through `llm.provider=external_api`.
- Canonical local llama.cpp editor model (managed by Aman).
- Runtime flow is explicit: `ASR -> Alignment Heuristics -> Editor -> Fact Guard -> Vocabulary -> Injection`.

Build and packaging (maintainers):

@@ -268,6 +283,33 @@ make release-check
For offline packaging, set `AMAN_WHEELHOUSE_DIR` to a directory containing the
required wheels.

Benchmarking (STT bypass, always dry):

```bash
aman bench --text "draft a short email to Marta confirming lunch" --repeat 10 --warmup 2
aman bench --text-file ./bench-input.txt --repeat 20 --json
```

`bench` does not capture audio and never injects text into desktop apps. It runs
the processing path from input transcript text through alignment/editor/fact-guard/vocabulary cleanup and
prints timing summaries.

Model evaluation lab (dataset + matrix sweep):

```bash
aman build-heuristic-dataset --input benchmarks/heuristics_dataset.raw.jsonl --output benchmarks/heuristics_dataset.jsonl
aman eval-models --dataset benchmarks/cleanup_dataset.jsonl --matrix benchmarks/model_matrix.small_first.json --heuristic-dataset benchmarks/heuristics_dataset.jsonl --heuristic-weight 0.25 --output benchmarks/results/latest.json
aman sync-default-model --report benchmarks/results/latest.json --artifacts benchmarks/model_artifacts.json --constants src/constants.py
```

`eval-models` runs a structured model/parameter sweep over a JSONL dataset and
outputs latency + quality metrics (including hybrid score, pass-1/pass-2 latency breakdown,
and correction safety metrics for `I mean` and spelling-disambiguation cases).
When `--heuristic-dataset` is provided, the report also includes alignment-heuristic
quality metrics (exact match, token-F1, rule precision/recall, per-tag breakdown).
`sync-default-model` promotes the report winner to the managed default model constants
using the artifact registry and can be run in `--check` mode for CI/release gates.
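
As a rough illustration of what `sync-default-model` consumes, a report entry might look like the sketch below; the field names and values are hypothetical, meant only to show the kind of data described above (winner, hybrid score, per-pass latency):

```json
{
  "winner": "qwen2.5-1.5b-instruct-q4",
  "hybrid_score": 0.87,
  "latency_ms": { "pass1": 410, "pass2": 280 }
}
```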

Control:

```bash
@@ -275,6 +317,9 @@ make run
make run config.example.json
make doctor
make self-check
make eval-models
make sync-default-model
make check-default-model
make check
```

@@ -298,6 +343,10 @@ CLI (internal/support fallback):
aman run --config ~/.config/aman/config.json
aman doctor --config ~/.config/aman/config.json --json
aman self-check --config ~/.config/aman/config.json --json
aman bench --text "example transcript" --repeat 5 --warmup 1
aman build-heuristic-dataset --input benchmarks/heuristics_dataset.raw.jsonl --output benchmarks/heuristics_dataset.jsonl --json
aman eval-models --dataset benchmarks/cleanup_dataset.jsonl --matrix benchmarks/model_matrix.small_first.json --heuristic-dataset benchmarks/heuristics_dataset.jsonl --heuristic-weight 0.25 --json
aman sync-default-model --check --report benchmarks/results/latest.json --artifacts benchmarks/model_artifacts.json --constants src/constants.py
aman version
aman init --config ~/.config/aman/config.json --force
```