# lel
A Python X11 speech-to-text (STT) daemon that records audio, transcribes it with Whisper, and injects the resulting text. It can optionally run local AI post-processing before injection.
## Requirements
- X11 (Wayland support scaffolded but not available yet)
- Python deps (core): `numpy`, `pillow`, `faster-whisper`, `llama-cpp-python`, `sounddevice` (requires PortAudio)
- Python deps (X11 extras): `PyGObject`, `python-xlib`
- Tray icon deps: `gtk3`, `libayatana-appindicator3`
System package names vary by distribution (for example `portaudio` vs. `libportaudio2`); commands for common distributions follow.
### Debian/Ubuntu (X11)
```bash
sudo apt install -y portaudio19-dev libportaudio2 python3-gi gir1.2-gtk-3.0 libayatana-appindicator3-1
```
### Arch Linux (X11)
```bash
sudo pacman -S --needed portaudio gtk3 libayatana-appindicator
```
### Fedora (X11)
```bash
sudo dnf install -y portaudio portaudio-devel gtk3 libayatana-appindicator-gtk3
```
### openSUSE (X11)
```bash
sudo zypper install -y portaudio portaudio-devel gtk3 libayatana-appindicator3-1
```
## Python Daemon
Install Python deps:
X11 (supported):
```bash
uv sync --extra x11
```
Wayland (scaffold only):
```bash
uv sync --extra wayland
```
Run:
```bash
uv run python3 src/leld.py --config ~/.config/lel/config.json
```
## Config
Create `~/.config/lel/config.json`:
```json
{
  "daemon": { "hotkey": "Cmd+m" },
  "recording": { "input": "0" },
  "stt": { "model": "base", "device": "cpu" },
  "injection": { "backend": "clipboard" },
  "ai": { "enabled": true },
  "logging": { "log_transcript": false }
}
```
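The `daemon.hotkey` value is a `+`-separated modifier-plus-key string. A minimal sketch of how such a string could be parsed (the `parse_hotkey` helper and the accepted modifier set are illustrative assumptions, not the daemon's actual API):

```python
def parse_hotkey(spec: str) -> tuple[frozenset[str], str]:
    """Split a hotkey string like "Cmd+m" into (modifiers, key).

    Modifier names are normalized to lowercase; the final
    component is treated as the key itself.
    """
    parts = [p.strip() for p in spec.split("+") if p.strip()]
    if not parts:
        raise ValueError(f"empty hotkey spec: {spec!r}")
    *mods, key = parts
    known = {"cmd", "super", "ctrl", "alt", "shift"}
    normalized = frozenset(m.lower() for m in mods)
    unknown = normalized - known
    if unknown:
        raise ValueError(f"unknown modifiers: {sorted(unknown)}")
    return normalized, key.lower()
```

With the config above, `parse_hotkey("Cmd+m")` yields `(frozenset({"cmd"}), "m")`.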
`recording.input` can be a device index (preferred) or a substring of the device
name.
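Resolution of the `recording.input` value could look like the following sketch, operating on entries shaped like `sounddevice.query_devices()` output (the helper name and first-match fallback are assumptions):

```python
def resolve_input_device(spec: str, devices: list[dict]) -> int:
    """Return the index of the requested input device.

    `spec` is either a numeric index ("0") or a case-insensitive
    substring of a device name; name matches are restricted to
    devices that actually have input channels.
    """
    if spec.isdigit():
        index = int(spec)
        if index >= len(devices):
            raise ValueError(f"device index {index} out of range")
        return index
    matches = [
        i for i, d in enumerate(devices)
        if spec.lower() in d["name"].lower()
        and d.get("max_input_channels", 0) > 0
    ]
    if not matches:
        raise ValueError(f"no input device matching {spec!r}")
    return matches[0]  # first match wins
```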
`ai.enabled` controls local cleanup. When enabled, the pinned Llama-3.2-3B GGUF
model is downloaded on first use to `~/.cache/lel/models/`.
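The download-on-first-use behavior amounts to a cache check before loading the model; a sketch under stated assumptions (the `ensure_model` helper, its `fetch` callable, and the `cache_dir` override are illustrative, and the actual model URL is not reproduced here):

```python
from pathlib import Path
from typing import Callable, Optional

def ensure_model(filename: str, fetch: Callable[[Path], None],
                 cache_dir: Optional[str] = None) -> Path:
    """Return the cached model path, calling `fetch` exactly once
    to populate it when the file is not already present."""
    base = Path(cache_dir) if cache_dir else Path.home() / ".cache" / "lel" / "models"
    base.mkdir(parents=True, exist_ok=True)
    path = base / filename
    if not path.exists():
        fetch(path)  # e.g. an HTTP download of the pinned GGUF file
    return path
```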
`logging.log_transcript` controls whether recognized/processed text is written
to logs. This is disabled by default. `-v/--verbose` also enables transcript
logging and llama.cpp logs; llama logs are prefixed with `llama::`.
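The opt-in gating between `logging.log_transcript` and `-v/--verbose` can be sketched as follows (helper names are illustrative):

```python
import logging

def should_log_transcript(config: dict, verbose: bool) -> bool:
    """Transcripts are logged only on explicit opt-in: either the
    config flag or the verbose CLI switch enables them."""
    flag = config.get("logging", {}).get("log_transcript", False)
    return bool(flag) or verbose

def log_transcript(text: str, config: dict, verbose: bool) -> None:
    if should_log_transcript(config, verbose):
        logging.getLogger("lel").info("transcript: %s", text)
```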
## systemd user service
```bash
mkdir -p ~/.local/share/lel/src/assets ~/.config/systemd/user
cp src/*.py ~/.local/share/lel/src/
cp src/assets/*.png ~/.local/share/lel/src/assets/
cp systemd/lel.service ~/.config/systemd/user/lel.service
systemctl --user daemon-reload
systemctl --user enable --now lel
```
## Usage
- Press the hotkey once to start recording.
- Press it again to stop and run STT.
- Transcript contents are logged only when `logging.log_transcript` is enabled or `-v/--verbose` is used.
Wayland note:
- Running under Wayland currently exits with a message explaining that it is not supported yet.
Injection backends:
- `clipboard`: copy to clipboard and inject via Ctrl+Shift+V (GTK clipboard + XTest)
- `injection`: type the text with simulated keypresses (XTest)
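The two backends can be modeled as interchangeable callables behind one dispatch point. A sketch with the X-side calls stubbed out (the `make_injector` factory and its parameter names are assumptions; the real daemon drives the GTK clipboard and XTest directly):

```python
from typing import Callable

def make_injector(backend: str,
                  set_clipboard: Callable[[str], None],
                  send_keys: Callable[[str], None],
                  type_text: Callable[[str], None]) -> Callable[[str], None]:
    """Return a function that injects text via the chosen backend.

    clipboard: place the text on the clipboard, then send Ctrl+Shift+V.
    injection: type the text directly with simulated keypresses.
    """
    if backend == "clipboard":
        def inject(text: str) -> None:
            set_clipboard(text)        # GTK clipboard in the real daemon
            send_keys("ctrl+shift+v")  # XTest fake key events
        return inject
    if backend == "injection":
        return type_text               # per-character XTest keypresses
    raise ValueError(f"unknown injection backend: {backend!r}")
```

Isolating the backend choice this way keeps the X11-specific calls behind a single seam, which is also where a future Wayland implementation would plug in.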
AI processing:
- Local llama.cpp model only (no remote provider configuration).
Makefile targets:
```bash
make run
make check
```