# aman

Local amanuensis

A Python X11 STT daemon that records audio, runs Whisper, applies local AI cleanup, and injects text.
## Target User

The canonical Aman user is a desktop professional who wants dictation and rewriting features without learning Python tooling.

- End-user path: native OS package install.
- Developer path: Python/uv workflows.

Persona details and distribution policy are documented in
`docs/persona-and-distribution.md`.
## Install (Recommended)

End users do not need uv.

### Debian/Ubuntu (.deb)

Download a release artifact and install it:

```sh
sudo apt install ./aman_<version>_<arch>.deb
```

Then enable the user service:

```sh
systemctl --user daemon-reload
systemctl --user enable --now aman
```

### Arch Linux

Use the generated packaging inputs (PKGBUILD + source tarball) in `dist/arch/`
or your own packaging pipeline.
## Distribution Matrix

| Channel | Audience | Status |
|---|---|---|
| Debian package (.deb) | End users on Ubuntu/Debian | Canonical |
| Arch PKGBUILD + source tarball | Arch maintainers/power users | Supported |
| Python wheel/sdist | Developers/integrators | Supported |
## Runtime Dependencies

- X11
- PortAudio runtime (`libportaudio2` or distro equivalent)
- GTK3 and AppIndicator runtime (`gtk3`, `libayatana-appindicator3`)
- Python GTK and X11 bindings (`python3-gi`/`python-gobject`, `python-xlib`)
### Ubuntu/Debian

```sh
sudo apt install -y libportaudio2 python3-gi python3-xlib gir1.2-gtk-3.0 libayatana-appindicator3-1
```

### Arch Linux

```sh
sudo pacman -S --needed portaudio gtk3 libayatana-appindicator python-gobject python-xlib
```

### Fedora

```sh
sudo dnf install -y portaudio gtk3 libayatana-appindicator-gtk3 python3-gobject python3-xlib
```

### openSUSE

```sh
sudo zypper install -y portaudio gtk3 libayatana-appindicator3-1 python3-gobject python3-python-xlib
```
## Quickstart

```sh
aman run
```

On first launch, Aman opens a graphical settings window automatically. It includes sections for:

- microphone input
- hotkey
- output backend
- writing profile
- output safety policy
- runtime strategy (managed vs custom Whisper path)
- help/about actions
## Config

Create `~/.config/aman/config.json` (or let `aman` create it automatically on first start if missing):

```json
{
  "config_version": 1,
  "daemon": { "hotkey": "Cmd+m" },
  "recording": { "input": "0" },
  "stt": {
    "provider": "local_whisper",
    "model": "base",
    "device": "cpu",
    "language": "auto"
  },
  "models": {
    "allow_custom_models": false,
    "whisper_model_path": ""
  },
  "injection": {
    "backend": "clipboard",
    "remove_transcription_from_clipboard": false
  },
  "safety": {
    "enabled": true,
    "strict": false
  },
  "ux": {
    "profile": "default",
    "show_notifications": true
  },
  "advanced": {
    "strict_startup": true
  },
  "vocabulary": {
    "replacements": [
      { "from": "Martha", "to": "Marta" },
      { "from": "docker", "to": "Docker" }
    ],
    "terms": ["Systemd", "Kubernetes"]
  }
}
```
`config_version` is required and currently must be `1`. Legacy unversioned
configs are migrated automatically on load.

`recording.input` can be a device index (preferred) or a substring of the device
name. If `recording.input` is explicitly set and cannot be resolved, startup fails
instead of falling back to a default device.

Config validation is strict: unknown fields are rejected with a startup error. Validation errors include the exact field and an example fix snippet.
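As an illustration of the strict-validation idea, here is a minimal sketch of unknown-field rejection. The section and field names come from the example config above; the validator itself is hypothetical and far smaller than Aman's real schema:

```python
# Minimal sketch of strict config validation: unknown fields are
# rejected with an error naming the exact field and an example fix.
# KNOWN_FIELDS is a truncated illustration, not Aman's full schema.
KNOWN_FIELDS = {
    "daemon": {"hotkey"},
    "recording": {"input"},
    "stt": {"provider", "model", "device", "language"},
}

def validate_section(section: str, values: dict) -> None:
    allowed = KNOWN_FIELDS.get(section)
    if allowed is None:
        raise ValueError(f"unknown config section: {section!r}")
    for key in values:
        if key not in allowed:
            example = f'"{section}": {{"{sorted(allowed)[0]}": ...}}'
            raise ValueError(
                f"unknown field {section}.{key}; example fix: {example}"
            )

validate_section("stt", {"provider": "local_whisper", "language": "auto"})  # passes
```

Failing fast at startup, rather than silently ignoring a typo like `"hotkeys"`, is what makes the error message actionable.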
Profile options:

- `ux.profile=default`: baseline cleanup behavior.
- `ux.profile=fast`: lower-latency AI generation settings.
- `ux.profile=polished`: same cleanup depth as `default`.
- `safety.enabled=true`: enables fact-preservation checks (names/numbers/IDs/URLs).
- `safety.strict=false`: fall back to a safer draft when fact checks fail.
- `safety.strict=true`: reject output when fact checks fail.
- `advanced.strict_startup=true`: keep fail-fast startup validation behavior.
Transcription language:

- `stt.language=auto` (default) enables Whisper auto-detection.
- You can pin the language with Whisper codes (for example `en`, `es`, `pt`, `ja`, `zh`) or common names like `English`/`Spanish`.
- If a pinned language hint is rejected by the runtime, Aman logs a warning and retries with auto-detect.
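The hint handling above can be pictured with a small normalization helper. The alias table and function are illustrative, not Aman's internals; Whisper itself accepts ISO-639-1 codes:

```python
# Sketch: normalize a user-supplied language hint to a Whisper code,
# falling back to auto-detection for unknown hints.
ALIASES = {"english": "en", "spanish": "es", "portuguese": "pt",
           "japanese": "ja", "chinese": "zh"}  # truncated for illustration
CODES = set(ALIASES.values())

def normalize_language(hint: str) -> str:
    value = hint.strip().lower()
    if value in ("", "auto"):
        return "auto"                      # let Whisper auto-detect
    if value in CODES:
        return value                       # already a Whisper code
    return ALIASES.get(value, "auto")      # common name, or fall back to auto

print(normalize_language("English"))  # -> en
```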
Hotkey notes:

- Use one key plus optional modifiers (for example `Cmd+m`, `Super+m`, `Ctrl+space`).
- `Super` and `Cmd` are equivalent aliases for the same modifier.
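A minimal sketch of how such hotkey strings can be parsed, with `Cmd` and `Super` collapsing to one modifier; the parser is hypothetical, not Aman's actual implementation:

```python
# Sketch: parse "Cmd+m"-style hotkey specs into (modifiers, key),
# treating Cmd and Super as the same modifier.
MODIFIER_ALIASES = {"cmd": "super", "super": "super",
                    "ctrl": "ctrl", "alt": "alt", "shift": "shift"}

def parse_hotkey(spec: str) -> tuple:
    parts = [p.strip().lower() for p in spec.split("+") if p.strip()]
    if not parts:
        raise ValueError("empty hotkey")
    *mods, key = parts                     # last token is the key itself
    resolved = set()
    for mod in mods:
        if mod not in MODIFIER_ALIASES:
            raise ValueError(f"unknown modifier: {mod!r}")
        resolved.add(MODIFIER_ALIASES[mod])
    return frozenset(resolved), key

assert parse_hotkey("Cmd+m") == parse_hotkey("Super+m")
```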
AI cleanup is always enabled and uses the locked local Qwen2.5-1.5B GGUF model
downloaded to `~/.cache/aman/models/` during daemon initialization.
Prompts are structured with semantic XML tags for both system and user messages
to improve instruction adherence and output consistency.
Cleanup runs in two local passes:

- pass 1 drafts cleaned text and labels ambiguity decisions (correction/literal/spelling/filler)
- pass 2 audits those decisions conservatively and emits the final `cleaned_text`

This keeps Aman in dictation mode: it does not execute editing instructions embedded in transcript text.

Before Aman reports `ready`, local llama runs a tiny warmup completion so the first real transcription is faster. If warmup fails and `advanced.strict_startup=true`, startup fails fast. With `advanced.strict_startup=false`, Aman logs a warning and continues.

Model downloads use a network timeout and SHA256 verification before activation. Cached models are checksum-verified on startup; mismatches trigger a forced redownload.
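The startup checksum gate can be sketched in a few lines; the path and digest below are placeholders, and the real code also handles the timeout and redownload:

```python
# Sketch: verify a cached model file against its expected SHA256 before
# activation. A mismatch (or missing file) signals: force a redownload.
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        while chunk := fh.read(chunk_size):   # stream in 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest()

def model_is_valid(path: Path, expected_sha256: str) -> bool:
    return path.is_file() and sha256_of(path) == expected_sha256
```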
Provider policy:

- `Aman-managed` mode (recommended) is the canonical supported UX: Aman handles model lifecycle and safe defaults for you.
- `Expert mode` is opt-in and exposes a custom Whisper model path for advanced users.
- Editor model/provider configuration is intentionally not exposed in config.
- Custom Whisper paths are only active with `models.allow_custom_models=true`.
Use `-v`/`--verbose` to enable DEBUG logs, including recognized/processed
transcript text and llama.cpp logs (`llama::` prefix). Without `-v`, logs are
INFO level.
Vocabulary correction:

- `vocabulary.replacements` is deterministic correction (`from` -> `to`).
- `vocabulary.terms` is a preferred spelling list used as hinting context.
- Wildcards are intentionally rejected (`*`, `?`, `[`, `]`, `{`, `}`) to avoid ambiguous rules.
- Rules are deduplicated case-insensitively; conflicting replacements are rejected.
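The rules above can be sketched as a small compile-then-apply pipeline. This is an illustration under stated assumptions (word-boundary matching, example rules from the config sample), not Aman's actual correction code:

```python
# Sketch: deterministic vocabulary replacement with wildcard rejection
# and case-insensitive dedup, applied at word boundaries.
import re

FORBIDDEN = set("*?[]{}")

def compile_replacements(rules):
    seen, compiled = set(), []
    for rule in rules:
        src, dst = rule["from"], rule["to"]
        if FORBIDDEN & set(src):
            raise ValueError(f"wildcards not allowed in {src!r}")
        key = src.lower()
        if key in seen:                      # duplicate or conflicting rule
            raise ValueError(f"conflicting rule for {src!r}")
        seen.add(key)
        compiled.append((re.compile(rf"\b{re.escape(src)}\b"), dst))
    return compiled

def apply_replacements(text, compiled):
    for pattern, dst in compiled:
        text = pattern.sub(dst, text)
    return text

rules = compile_replacements([{"from": "Martha", "to": "Marta"},
                              {"from": "docker", "to": "Docker"}])
print(apply_replacements("email Martha about docker", rules))  # -> email Marta about Docker
```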
STT hinting:

- Vocabulary is passed to Whisper as compact `hotwords` only when that argument is supported by the installed `faster-whisper` runtime.
- Aman enables `word_timestamps` when supported and runs a conservative alignment heuristic pass (self-correction/restart detection) before the editor stage.
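To make the restart-detection idea concrete, here is a simplified word-level heuristic: when a phrase is immediately repeated (a speaker restarting), keep only the final attempt. Aman's actual alignment pass also uses timestamps and is more conservative:

```python
# Sketch of a self-correction/restart heuristic: collapse an
# immediately repeated word span, keeping the last occurrence.
def collapse_restarts(words, max_span=4):
    out = []
    for word in words:
        out.append(word)
        for span in range(max_span, 0, -1):
            # If the last `span` words repeat the `span` words before them,
            # the first attempt was abandoned: drop it.
            if len(out) >= 2 * span and out[-2 * span:-span] == out[-span:]:
                del out[-2 * span:-span]
                break
    return out

print(" ".join(collapse_restarts("I went to I went to the store".split())))  # -> I went to the store
```

Note the trade-off: intentional repetition ("very very good") would also collapse, which is one reason the production pass is deliberately conservative.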
Fact guard:

- Aman runs a deterministic fact-preservation verifier after editor output.
- If facts are changed/invented and `safety.strict=false`, Aman falls back to the safer aligned draft.
- If facts are changed/invented and `safety.strict=true`, processing fails and output is not injected.
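A toy version of such a verifier compares the multiset of fact-like tokens (numbers, URLs, capitalized names) before and after editing. The pattern and functions here are a hypothetical sketch; Aman's verifier is more thorough:

```python
# Sketch: deterministic fact-preservation check. The edit fails the
# check if it drops a fact-like token or invents a new one.
import re
from collections import Counter

FACT_PATTERN = re.compile(r"https?://\S+|\d[\d.,:/-]*|\b[A-Z][a-z]+\b")

def facts(text: str) -> Counter:
    return Counter(FACT_PATTERN.findall(text))

def facts_preserved(source: str, edited: str) -> bool:
    return facts(source) == facts(edited)

assert not facts_preserved("pay invoice 42", "pay invoice 24")  # number changed
```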
## systemd user service

```sh
make install-service
```

Service notes:

- The user unit launches `aman` from `PATH`.
- Package installs should provide the `aman` command automatically.
- Inspect failures with `systemctl --user status aman` and `journalctl --user -u aman -f`.
## Usage

- Press the hotkey once to start recording.
- Press it again to stop and run STT.
- Press `Esc` while recording to cancel without processing. `Esc` is only captured during active recording.
- Recording start is aborted if the cancel listener cannot be armed.
- Transcript contents are logged only when `-v`/`--verbose` is used.
- Tray menu includes: `Settings...`, `Help`, `About`, `Pause/Resume Aman`, `Reload Config`, `Run Diagnostics`, `Open Config Path`, and `Quit`.
- If required settings are not saved, Aman enters a `Settings Required` tray mode and does not capture audio.
Wayland note:
- Running under Wayland currently exits with a message explaining that it is not supported yet.
Injection backends:

- `clipboard`: copy to clipboard and inject via Ctrl+Shift+V (GTK clipboard + XTest)
- `injection`: type the text with simulated keypresses (XTest)
- `injection.remove_transcription_from_clipboard`: when `true` and backend is `clipboard`, restores/clears the clipboard after paste so the transcript is not kept there
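The save-paste-restore flow of the `clipboard` backend can be sketched with a stand-in clipboard object; the real backend talks to the GTK clipboard and synthesizes the Ctrl+Shift+V paste via XTest:

```python
# Sketch of the clipboard backend's restore behavior. Clipboard is a
# stand-in class for illustration, not the real GTK clipboard.
class Clipboard:
    def __init__(self, contents=""):
        self.contents = contents

def inject_via_clipboard(clipboard, transcript, paste, remove_after):
    previous = clipboard.contents      # remember what the user had copied
    clipboard.contents = transcript
    paste()                            # real backend: XTest Ctrl+Shift+V here
    if remove_after:
        clipboard.contents = previous  # don't keep the transcript around
```

The restore step is what `remove_transcription_from_clipboard=true` buys you: the transcript passes through the clipboard but does not linger there.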
Editor stage:

- Canonical local llama.cpp editor model (managed by Aman).
- Runtime flow is explicit: ASR -> Alignment Heuristics -> Editor -> Fact Guard -> Vocabulary -> Injection.
Build and packaging (maintainers):

```sh
make build
make package
make package-deb
make package-arch
make release-check
```

`make package-deb` installs Python dependencies while creating the package.
For offline packaging, set `AMAN_WHEELHOUSE_DIR` to a directory containing the
required wheels.
Benchmarking (STT bypass, always dry):

```sh
aman bench --text "draft a short email to Marta confirming lunch" --repeat 10 --warmup 2
aman bench --text-file ./bench-input.txt --repeat 20 --json
```

`bench` does not capture audio and never injects text into desktop apps. It runs
the processing path from input transcript text through alignment/editor/fact-guard/vocabulary cleanup and
prints timing summaries.
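The warmup/repeat/summarize loop behind such a benchmark can be sketched as follows; the function and statistics chosen here are illustrative, not the actual `bench` internals:

```python
# Sketch: dry benchmark loop with untimed warmup iterations and a
# simple latency summary over the timed repeats.
import statistics
import time

def bench(process, text, repeat=10, warmup=2):
    for _ in range(warmup):
        process(text)                          # warm caches; not timed
    samples = []
    for _ in range(repeat):
        start = time.perf_counter()
        process(text)
        samples.append(time.perf_counter() - start)
    ordered = sorted(samples)
    return {
        "mean_s": statistics.fmean(samples),
        "p95_s": ordered[int(0.95 * (len(ordered) - 1))],
        "min_s": ordered[0],
        "max_s": ordered[-1],
    }

stats = bench(lambda t: t.upper(), "draft a short email", repeat=5)
```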
Model evaluation lab (dataset + matrix sweep):

```sh
aman build-heuristic-dataset --input benchmarks/heuristics_dataset.raw.jsonl --output benchmarks/heuristics_dataset.jsonl
aman eval-models --dataset benchmarks/cleanup_dataset.jsonl --matrix benchmarks/model_matrix.small_first.json --heuristic-dataset benchmarks/heuristics_dataset.jsonl --heuristic-weight 0.25 --output benchmarks/results/latest.json
aman sync-default-model --report benchmarks/results/latest.json --artifacts benchmarks/model_artifacts.json --constants src/constants.py
```

`eval-models` runs a structured model/parameter sweep over a JSONL dataset and
outputs latency + quality metrics (including hybrid score, pass-1/pass-2 latency breakdown,
and correction safety metrics for "I mean" and spelling-disambiguation cases).
When `--heuristic-dataset` is provided, the report also includes alignment-heuristic
quality metrics (exact match, token-F1, rule precision/recall, per-tag breakdown).
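For readers unfamiliar with token-F1, here is the standard bag-of-tokens formulation such reports typically use; this is a generic sketch, not the exact metric code in the evaluation lab:

```python
# Sketch: token-level F1 between a predicted cleanup and a reference,
# computed over bag-of-token overlap.
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    pred, ref = Counter(prediction.split()), Counter(reference.split())
    overlap = sum((pred & ref).values())       # shared tokens, with multiplicity
    if overlap == 0:
        return 0.0
    precision = overlap / sum(pred.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

assert token_f1("the cat sat", "the cat sat") == 1.0
```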
`sync-default-model` promotes the report winner to the managed default model constants
using the artifact registry and can be run in `--check` mode for CI/release gates.
Control:

```sh
make run
make run config.example.json
make doctor
make self-check
make eval-models
make sync-default-model
make check-default-model
make check
```
Developer setup (optional, uv workflow):

```sh
uv sync --extra x11
uv run aman run --config ~/.config/aman/config.json
```

Developer setup (optional, pip workflow):

```sh
make install-local
aman run --config ~/.config/aman/config.json
```
CLI (internal/support fallback):

```sh
aman run --config ~/.config/aman/config.json
aman doctor --config ~/.config/aman/config.json --json
aman self-check --config ~/.config/aman/config.json --json
aman bench --text "example transcript" --repeat 5 --warmup 1
aman build-heuristic-dataset --input benchmarks/heuristics_dataset.raw.jsonl --output benchmarks/heuristics_dataset.jsonl --json
aman eval-models --dataset benchmarks/cleanup_dataset.jsonl --matrix benchmarks/model_matrix.small_first.json --heuristic-dataset benchmarks/heuristics_dataset.jsonl --heuristic-weight 0.25 --json
aman sync-default-model --check --report benchmarks/results/latest.json --artifacts benchmarks/model_artifacts.json --constants src/constants.py
aman version
aman init --config ~/.config/aman/config.json --force
```