# lel

Python X11 STT daemon that records audio, runs Whisper, applies local AI cleanup, and injects text.

## Requirements

- X11 (Wayland support scaffolded but not available yet)
- `sounddevice` (PortAudio)
- `faster-whisper`
- `llama-cpp-python`
- Tray icon deps: `gtk3`, `libayatana-appindicator3`
- Python deps (core): `numpy`, `pillow`, `faster-whisper`, `llama-cpp-python`, `sounddevice`
- X11 extras: `PyGObject`, `python-xlib`

System packages (example names): `portaudio`/`libportaudio2`.

<details>
<summary>Ubuntu (X11)</summary>

```bash
sudo apt install -y portaudio19-dev libportaudio2 python3-gi gir1.2-gtk-3.0 libayatana-appindicator3-1
```

</details>

<details>
<summary>Debian (X11)</summary>

```bash
sudo apt install -y portaudio19-dev libportaudio2 python3-gi gir1.2-gtk-3.0 libayatana-appindicator3-1
```

</details>

<details>
<summary>Arch Linux (X11)</summary>

```bash
sudo pacman -S --needed portaudio gtk3 libayatana-appindicator
```

</details>

<details>
<summary>Fedora (X11)</summary>

```bash
sudo dnf install -y portaudio portaudio-devel gtk3 libayatana-appindicator-gtk3
```

</details>

<details>
<summary>openSUSE (X11)</summary>

```bash
sudo zypper install -y portaudio portaudio-devel gtk3 libayatana-appindicator3-1
```

</details>

## Python Daemon

Install Python deps:

X11 (supported):

```bash
uv sync --extra x11
```

Wayland (scaffold only):

```bash
uv sync --extra wayland
```

Run:

```bash
uv run python3 src/leld.py --config ~/.config/lel/config.json
```

## Config

Create `~/.config/lel/config.json`:

```json
{
  "daemon": { "hotkey": "Cmd+m" },
  "recording": { "input": "0" },
  "stt": { "model": "base", "device": "cpu" },
  "injection": { "backend": "clipboard" },
  "ai": { "enabled": true },
  "logging": { "log_transcript": false },
  "vocabulary": {
    "replacements": [
      { "from": "Martha", "to": "Marta" },
      { "from": "docker", "to": "Docker" }
    ],
    "terms": ["Systemd", "Kubernetes"],
    "max_rules": 500,
    "max_terms": 500
  },
  "domain_inference": { "enabled": true, "mode": "auto" }
}
```
Recording input can be a device index (preferred) or a substring of the device name.

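The index-or-substring rule can be sketched against the device list that `sounddevice.query_devices()` returns; `resolve_input_device` here is an illustrative helper, not lel's actual code:

```python
def resolve_input_device(devices: list[dict], spec: str) -> int:
    """Resolve a `recording.input` spec to a device index.

    `spec` is either a numeric string (device index, preferred) or a
    case-insensitive substring of the device name. `devices` mirrors the
    shape of `sounddevice.query_devices()` output.

    NOTE: illustrative sketch, not part of lel's source.
    """
    if spec.isdigit():
        index = int(spec)
        if index >= len(devices):
            raise ValueError(f"no input device with index {index}")
        return index
    needle = spec.lower()
    for i, dev in enumerate(devices):
        # Only match devices that can actually record.
        if needle in dev["name"].lower() and dev.get("max_input_channels", 0) > 0:
            return i
    raise ValueError(f"no input device matching {spec!r}")
```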
`ai.enabled` is accepted for compatibility but currently has no runtime effect.
AI cleanup is always enabled and uses the locked local Llama-3.2-3B GGUF model
downloaded to `~/.cache/lel/models/` on first use.

`logging.log_transcript` controls whether recognized/processed text is written
to logs. This is disabled by default. `-v/--verbose` also enables transcript
logging and llama.cpp logs; llama logs are prefixed with `llama::`.

Vocabulary correction:
- `vocabulary.replacements` is deterministic correction (`from -> to`).
- `vocabulary.terms` is a preferred spelling list used as hinting context.
- Wildcards are intentionally rejected (`*`, `?`, `[`, `]`, `{`, `}`) to avoid ambiguous rules.
- Rules are deduplicated case-insensitively; conflicting replacements are rejected.
- Limits are bounded by `max_rules` and `max_terms`.
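The constraints above (wildcard rejection, case-insensitive dedup, conflict rejection, `max_rules` bound) can be sketched as follows; these helpers are illustrative, not lel's actual implementation:

```python
import re

WILDCARDS = set("*?[]{}")

def compile_replacements(rules: list[dict], max_rules: int = 500) -> dict[str, str]:
    """Validate and deduplicate `vocabulary.replacements` rules.

    Illustrative sketch of the documented constraints: wildcards rejected,
    case-insensitive dedup, conflicting targets rejected, bounded by max_rules.
    """
    compiled: dict[str, str] = {}
    for rule in rules:
        src, dst = rule["from"], rule["to"]
        if WILDCARDS & set(src) or WILDCARDS & set(dst):
            raise ValueError(f"wildcards are not allowed: {src!r} -> {dst!r}")
        key = src.lower()
        if key in compiled and compiled[key] != dst:
            raise ValueError(f"conflicting replacement for {src!r}")
        compiled[key] = dst
        if len(compiled) > max_rules:
            raise ValueError(f"more than {max_rules} replacement rules")
    return compiled

def apply_replacements(text: str, compiled: dict[str, str]) -> str:
    """Apply deterministic whole-word replacements, case-insensitively."""
    for src, dst in compiled.items():
        text = re.sub(rf"\b{re.escape(src)}\b", dst, text, flags=re.IGNORECASE)
    return text
```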

Domain inference:
- `domain_inference.mode` currently supports `auto`.
- Domain context is advisory only and is used to improve cleanup prompts.
- When confidence is low, it falls back to `general` context.
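The low-confidence fallback can be illustrated with a toy keyword scorer; the domain names, keyword sets, and threshold below are made up for the example and are not lel's actual logic:

```python
# Hypothetical keyword sets, for illustration only.
DOMAIN_KEYWORDS = {
    "devops": {"kubernetes", "docker", "systemd", "deploy"},
    "medicine": {"patient", "dosage", "diagnosis"},
}

def infer_domain(transcript: str, min_hits: int = 2) -> str:
    """Pick the domain whose keywords appear most often in the transcript;
    fall back to 'general' when the hit count (confidence proxy) is low."""
    words = set(transcript.lower().split())
    best, hits = "general", 0
    for domain, keywords in DOMAIN_KEYWORDS.items():
        n = len(words & keywords)
        if n > hits:
            best, hits = domain, n
    return best if hits >= min_hits else "general"
```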

STT hinting:
- Vocabulary is passed to Whisper as `hotwords`/`initial_prompt` only when those
arguments are supported by the installed `faster-whisper` runtime.
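One way to feature-detect those arguments is to inspect the signature of the installed runtime's `transcribe` callable; `stt_kwargs` is a sketch of the idea, not lel's actual code:

```python
import inspect

def stt_kwargs(transcribe, vocabulary_terms: list[str]) -> dict:
    """Build Whisper kwargs, passing vocabulary hints only when the
    installed `transcribe` signature supports them. Prefers `hotwords`,
    falls back to `initial_prompt`, else passes nothing."""
    params = inspect.signature(transcribe).parameters
    hint = " ".join(vocabulary_terms)
    kwargs = {}
    if "hotwords" in params:
        kwargs["hotwords"] = hint
    elif "initial_prompt" in params:
        kwargs["initial_prompt"] = hint
    return kwargs
```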

## systemd user service

```bash
mkdir -p ~/.local/share/lel/src/assets ~/.config/systemd/user
cp src/*.py ~/.local/share/lel/src/
cp src/assets/*.png ~/.local/share/lel/src/assets/
cp systemd/lel.service ~/.config/systemd/user/lel.service
systemctl --user daemon-reload
systemctl --user enable --now lel
```
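The shipped `systemd/lel.service` is not reproduced here; a user unit matching the steps above might look like the sketch below. The `ExecStart` path and target names are assumptions, not the contents of the actual file:

```ini
[Unit]
Description=lel STT daemon
After=graphical-session.target

[Service]
; Assumed entry point; align with the files copied above.
ExecStart=/usr/bin/python3 %h/.local/share/lel/src/leld.py --config %h/.config/lel/config.json
Restart=on-failure

[Install]
WantedBy=default.target
```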

## Usage

- Press the hotkey once to start recording.
- Press it again to stop and run STT.
- Press `Esc` while recording to cancel without processing.
- Transcript contents are logged only when `logging.log_transcript` is enabled or `-v/--verbose` is used.

Wayland note:

- Running under Wayland currently exits with a message explaining that it is not supported yet.

Injection backends:

- `clipboard`: copy to clipboard and inject via Ctrl+Shift+V (GTK clipboard + XTest)
- `injection`: type the text with simulated keypresses (XTest)

AI processing:

- Local llama.cpp model only (no remote provider configuration).

Control:

```bash
make run
make check
```