Refine config and runtime flow

Thales Maciel 2026-02-24 14:15:17 -03:00
parent 85e082dd46
commit b3be444625
16 changed files with 642 additions and 137 deletions


@@ -1,6 +1,6 @@
# lel
Python X11 STT daemon that records audio, runs Whisper, logs the transcript, and can optionally run AI post-processing before injecting text.
Python X11 STT daemon that records audio, runs Whisper, and injects text. It can optionally run local AI post-processing before injection.
## Requirements
@@ -9,7 +9,7 @@ Python X11 STT daemon that records audio, runs Whisper, logs the transcript, and
- `faster-whisper`
- `llama-cpp-python`
- Tray icon deps: `gtk3`, `libayatana-appindicator3`
- Python deps (core): `pillow`, `faster-whisper`, `llama-cpp-python`, `sounddevice`
- Python deps (core): `numpy`, `pillow`, `faster-whisper`, `llama-cpp-python`, `sounddevice`
- X11 extras: `PyGObject`, `python-xlib`
System packages (example names): `portaudio`/`libportaudio2`.
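A minimal install sketch, assuming pip inside a virtualenv and the package names listed above (on many distros, `PyGObject` installs more smoothly from system packages than from pip):

```bash
# Sketch: install the core Python deps into a virtualenv.
# Assumes the pip package names match the requirements list above.
python3 -m venv ~/.venvs/lel
~/.venvs/lel/bin/pip install numpy pillow faster-whisper llama-cpp-python sounddevice PyGObject python-xlib
```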
@@ -90,23 +90,29 @@ Create `~/.config/lel/config.json`:
"daemon": { "hotkey": "Cmd+m" },
"recording": { "input": "0" },
"stt": { "model": "base", "device": "cpu" },
"injection": { "backend": "clipboard" }
"injection": { "backend": "clipboard" },
"ai": { "enabled": true },
"logging": { "log_transcript": false }
}
```
Recording input can be a device index (preferred) or a substring of the device
name.
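To find a usable value for `recording.input`, the `sounddevice` package can print the device table when run as a module:

```bash
# Lists audio devices with their indices; use an input device's index
# (or a substring of its name) as recording.input.
python3 -m sounddevice
```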
The LLM model is downloaded on first startup to `~/.cache/lel/models/` and uses
the locked Llama-3.2-3B GGUF model.
Pass `-v/--verbose` to see verbose logs, including llama.cpp loader logs; these
messages are prefixed with `llama::`.
`ai.enabled` controls local AI cleanup. When enabled, the locked Llama-3.2-3B GGUF
model is downloaded on first use to `~/.cache/lel/models/`.
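To confirm the download, the cache directory from above can be inspected:

```bash
# The GGUF file should appear here after the first AI-enabled run.
ls -lh ~/.cache/lel/models/
```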
`logging.log_transcript` controls whether recognized/processed text is written
to the logs; it is disabled by default. `-v/--verbose` also enables transcript
logging and llama.cpp loader logs; the latter are prefixed with `llama::`.
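For a quick foreground test with verbose output, a sketch assuming the daemon is started straight from the checkout (`src/leld.py`, as in the install steps below):

```bash
# Run in the foreground with verbose logs (transcripts + llama:: messages).
python3 src/leld.py -v
```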
## systemd user service
```bash
mkdir -p ~/.local/bin
cp src/leld.py ~/.local/bin/leld.py
mkdir -p ~/.local/share/lel/src/assets
cp src/*.py ~/.local/share/lel/src/
cp src/assets/*.png ~/.local/share/lel/src/assets/
cp systemd/lel.service ~/.config/systemd/user/lel.service
systemctl --user daemon-reload
systemctl --user enable --now lel
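# One way to verify the unit and follow its logs (standard systemd commands):
systemctl --user status lel
journalctl --user -u lel -f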
@@ -116,7 +122,7 @@ systemctl --user enable --now lel
- Press the hotkey once to start recording.
- Press it again to stop and run STT.
- The transcript is logged to stderr.
- Transcript contents are logged only when `logging.log_transcript` is enabled or `-v/--verbose` is used.
Wayland note:
@@ -127,12 +133,13 @@ Injection backends:
- `clipboard`: copy to clipboard and inject via Ctrl+Shift+V (GTK clipboard + XTest)
- `injection`: type the text with simulated keypresses (XTest)
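To switch backends, change `injection.backend` in the config and restart the daemon (assuming the config is read at startup). A sketch using `jq`, though any editor works:

```bash
# Sketch: switch to the XTest typing backend, then restart the user service.
cfg=~/.config/lel/config.json
jq '.injection.backend = "injection"' "$cfg" > "$cfg.tmp" && mv "$cfg.tmp" "$cfg"
systemctl --user restart lel
```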
AI provider:
AI processing:
- Generic OpenAI-compatible chat API at `ai_base_url` (base URL only; the app uses `/v1/chat/completions`)
- Local llama.cpp model only (no remote provider configuration).
Control:
```bash
make run
make check
```