# aman

Local amanuensis: a Python X11 STT daemon that records audio, runs Whisper, applies local AI cleanup, and injects the resulting text.
## Requirements

- X11
- Audio: `sounddevice` (PortAudio)
- STT: `faster-whisper`
- AI cleanup: `llama-cpp-python`
- Tray icon deps: `gtk3`, `libayatana-appindicator3`
- Python deps (core): `numpy`, `pillow`, `faster-whisper`, `llama-cpp-python`, `sounddevice`
- X11 extras: `PyGObject`, `python-xlib`
- System packages (example names): `portaudio`/`libportaudio2`
### Ubuntu/Debian

```bash
sudo apt install -y portaudio19-dev libportaudio2 python3-gi gir1.2-gtk-3.0 libayatana-appindicator3-1
```

### Arch Linux

```bash
sudo pacman -S --needed portaudio gtk3 libayatana-appindicator
```

### Fedora

```bash
sudo dnf install -y portaudio portaudio-devel gtk3 libayatana-appindicator-gtk3
```

### openSUSE

```bash
sudo zypper install -y portaudio portaudio-devel gtk3 libayatana-appindicator3-1
```
## Python Daemon

Install the Python deps for X11 (the supported backend):

```bash
uv sync --extra x11
```
## Quickstart

```bash
uv run aman run
```
On first launch, Aman opens a graphical settings window automatically. It includes sections for:
- microphone input
- hotkey
- output backend
- writing profile
- runtime and model strategy
- help/about actions
## Config

Create `~/.config/aman/config.json` (or let `aman` create it automatically on first start if missing):
```json
{
  "config_version": 1,
  "daemon": { "hotkey": "Cmd+m" },
  "recording": { "input": "0" },
  "stt": {
    "provider": "local_whisper",
    "model": "base",
    "device": "cpu",
    "language": "auto"
  },
  "llm": { "provider": "local_llama" },
  "models": {
    "allow_custom_models": false,
    "whisper_model_path": "",
    "llm_model_path": ""
  },
  "external_api": {
    "enabled": false,
    "provider": "openai",
    "base_url": "https://api.openai.com/v1",
    "model": "gpt-4o-mini",
    "timeout_ms": 15000,
    "max_retries": 2,
    "api_key_env_var": "AMAN_EXTERNAL_API_KEY"
  },
  "injection": {
    "backend": "clipboard",
    "remove_transcription_from_clipboard": false
  },
  "ux": {
    "profile": "default",
    "show_notifications": true
  },
  "advanced": {
    "strict_startup": true
  },
  "vocabulary": {
    "replacements": [
      { "from": "Martha", "to": "Marta" },
      { "from": "docker", "to": "Docker" }
    ],
    "terms": ["Systemd", "Kubernetes"]
  }
}
```
`config_version` is required and currently must be `1`. Legacy unversioned configs are migrated automatically on load.
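The migration described above can be sketched roughly as follows; the function name is hypothetical and this is only an illustration of the behavior, not Aman's actual code:

```python
def migrate_config(cfg: dict) -> dict:
    """Stamp a legacy, unversioned config with version 1; reject any
    other version (only config_version == 1 is currently supported)."""
    if "config_version" not in cfg:
        # Legacy config written before versioning was introduced.
        cfg = {"config_version": 1, **cfg}
    if cfg["config_version"] != 1:
        raise ValueError(f"unsupported config_version: {cfg['config_version']}")
    return cfg
```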
Recording input can be a device index (preferred) or a substring of the device name. If `recording.input` is explicitly set and cannot be resolved, startup fails instead of falling back to a default device.
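A minimal sketch of that resolution logic, assuming a plain list of device names (the function is illustrative, not Aman's actual API):

```python
def resolve_input_device(setting: str, device_names: list[str]) -> int:
    """Resolve recording.input: a numeric string is a device index,
    anything else is a case-insensitive substring of the device name.
    Raises instead of falling back to a default device."""
    if setting.isdigit():
        index = int(setting)
        if index < len(device_names):
            return index
        raise RuntimeError(f"no input device at index {index}")
    matches = [i for i, name in enumerate(device_names)
               if setting.lower() in name.lower()]
    if not matches:
        raise RuntimeError(f"no input device matching {setting!r}")
    return matches[0]
```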
Config validation is strict: unknown fields are rejected with a startup error. Validation errors include the exact field and an example fix snippet.
Profile options:

- `ux.profile=default`: baseline cleanup behavior.
- `ux.profile=fast`: lower-latency AI generation settings.
- `ux.profile=polished`: same cleanup depth as `default`.
- `advanced.strict_startup=true`: keep fail-fast startup validation behavior.
Transcription language:

- `stt.language=auto` (default) enables Whisper auto-detection.
- You can pin a language with Whisper codes (for example `en`, `es`, `pt`, `ja`, `zh`) or common names like `English`/`Spanish`.
- If a pinned language hint is rejected by the runtime, Aman logs a warning and retries with auto-detect.
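For example, pinning transcription to Spanish would be a one-field change, merged into the config shown above:

```json
{ "stt": { "language": "es" } }
```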
Hotkey notes:

- Use one key plus optional modifiers (for example `Cmd+m`, `Super+m`, `Ctrl+space`).
- `Super` and `Cmd` are equivalent aliases for the same modifier.
AI cleanup is always enabled and uses the locked local Llama-3.2-3B GGUF model downloaded to `~/.cache/aman/models/` during daemon initialization.
Model downloads use a network timeout and SHA256 verification before activation.
Cached models are checksum-verified on startup; mismatches trigger a forced
redownload.
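The checksum step can be sketched as follows; this is a minimal illustration of SHA256 verification, and the function name is hypothetical:

```python
import hashlib

def verify_checksum(path: str, expected_sha256: str, chunk_size: int = 1 << 20) -> bool:
    """Stream the file in chunks and compare its SHA256 hex digest to
    the expected value (models are large, so avoid reading all at once)."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256.lower()
```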
Provider policy:

- `Aman-managed` mode (recommended) is the canonical supported UX: Aman handles the model lifecycle and safe defaults for you.
- `Expert mode` is opt-in and exposes custom providers/models for advanced users.
- External API auth is environment-variable based (`external_api.api_key_env_var`); no API key is stored in the config.
- Custom local model paths are only active with `models.allow_custom_models=true`.
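The environment-variable lookup described above amounts to something like this sketch (the helper name is illustrative, not Aman's actual code):

```python
import os

def load_api_key(cfg: dict) -> str:
    """Read the external API key from the environment variable named in
    the config; the key itself is never stored in the config file."""
    var = cfg["external_api"]["api_key_env_var"]
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"external API enabled but {var} is not set")
    return key
```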
Use `-v`/`--verbose` to enable DEBUG logs, including recognized/processed transcript text and llama.cpp logs (`llama::` prefix). Without `-v`, logs are at INFO level.
Vocabulary correction:

- `vocabulary.replacements` is deterministic correction (`from` -> `to`).
- `vocabulary.terms` is a preferred spelling list used as hinting context.
- Wildcards are intentionally rejected (`*`, `?`, `[`, `]`, `{`, `}`) to avoid ambiguous rules.
- Rules are deduplicated case-insensitively; conflicting replacements are rejected.
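A sketch of how such deterministic replacement can work, assuming whole-word matching and case-insensitive conflict detection (both assumptions; this is not Aman's actual implementation):

```python
import re

def apply_replacements(text: str, rules: list[dict]) -> str:
    """Apply from -> to rules on whole-word matches. Rules whose 'from'
    keys collide case-insensitively are rejected as conflicting."""
    seen = set()
    for rule in rules:
        key = rule["from"].lower()
        if key in seen:
            raise ValueError(f"conflicting replacement for {rule['from']!r}")
        seen.add(key)
    for rule in rules:
        # \b anchors keep the rule from rewriting substrings of longer words.
        text = re.sub(rf"\b{re.escape(rule['from'])}\b", rule["to"], text)
    return text
```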
STT hinting:

- Vocabulary is passed to Whisper as `hotwords`/`initial_prompt` only when those arguments are supported by the installed `faster-whisper` runtime.
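That kind of runtime capability check can be done by inspecting the callable's signature. A minimal sketch, using a stand-in function rather than the real `faster-whisper` API:

```python
import inspect

def supported_kwargs(func, candidates=("hotwords", "initial_prompt")) -> set:
    """Return which candidate keyword arguments the callable accepts,
    so hints are only passed when the installed runtime supports them."""
    params = inspect.signature(func).parameters
    if any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()):
        return set(candidates)  # **kwargs accepts anything
    return {c for c in candidates if c in params}

# Illustrative stand-in for an older transcribe() without hotwords support:
def transcribe(audio, initial_prompt=None):
    ...
```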
## systemd user service

```bash
uv pip install --user .
cp systemd/aman.service ~/.config/systemd/user/aman.service
systemctl --user daemon-reload
systemctl --user enable --now aman
```
Service notes:

- The user unit launches `aman` from `PATH`; ensure `~/.local/bin` is present in your user `PATH`.
- Inspect failures with `systemctl --user status aman` and `journalctl --user -u aman -f`.
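The repository ships the actual unit in `systemd/aman.service`; a minimal user unit along these lines would look roughly like the sketch below (illustrative only, check the shipped file for the real contents):

```ini
[Unit]
Description=aman STT daemon
After=graphical-session.target

[Service]
ExecStart=%h/.local/bin/aman run
Restart=on-failure

[Install]
WantedBy=default.target
```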
## Usage

- Press the hotkey once to start recording.
- Press it again to stop and run STT.
- Press `Esc` while recording to cancel without processing. `Esc` is only captured during active recording.
- Recording start is aborted if the cancel listener cannot be armed.
- Transcript contents are logged only when `-v`/`--verbose` is used.
- The tray menu includes `Settings...`, `Help`, `About`, `Pause/Resume Aman`, `Reload Config`, `Run Diagnostics`, `Open Config Path`, and `Quit`.
- If required settings are not saved, Aman enters a `Settings Required` tray mode and does not capture audio.
Wayland note:
- Running under Wayland currently exits with a message explaining that it is not supported yet.
Injection backends:

- `clipboard`: copy to the clipboard and inject via Ctrl+Shift+V (GTK clipboard + XTest)
- `injection`: type the text with simulated keypresses (XTest)
- `injection.remove_transcription_from_clipboard`: when `true` and the backend is `clipboard`, restores/clears the clipboard after paste so the transcript is not kept there
AI processing:

- Default: local llama.cpp model.
- Optional external API provider through `llm.provider=external_api`.
Control:

```bash
make run
make doctor
make self-check
make check
```
CLI (internal/support fallback, mostly for automation/tests):

```bash
uv run aman run --config ~/.config/aman/config.json
uv run aman doctor --config ~/.config/aman/config.json --json
uv run aman self-check --config ~/.config/aman/config.json --json
uv run aman version
uv run aman init --config ~/.config/aman/config.json --force
```