Compare commits


3 commits

Author SHA1 Message Date
c4433e5a20
Preserve alignment edits without ASR words
Checks: ci / test-and-build (push) cancelled.
Keep transcript-only runs eligible for alignment heuristics instead of bailing out when the ASR stage does not supply word timings.

Build fallback AsrWord entries from the transcript so cue-based corrections like "i mean" still apply, while reusing the existing literal guard for verbatim phrases.

Cover the new path in alignment and pipeline tests, and validate with `python3 -m unittest tests.test_alignment_edits tests.test_pipeline_engine`.
2026-03-11 13:50:07 -03:00
8169db98f4 Add NATO single-word dataset scaffold 2026-02-28 17:37:39 -03:00
510d280b74 Add Vosk keystroke eval tooling and findings 2026-02-28 17:20:09 -03:00
20 changed files with 2336 additions and 7 deletions


@@ -294,6 +294,65 @@ aman bench --text-file ./bench-input.txt --repeat 20 --json
the processing path from input transcript text through alignment/editor/fact-guard/vocabulary cleanup and
prints timing summaries.
Internal Vosk exploration (fixed-phrase dataset collection):
```bash
aman collect-fixed-phrases \
--phrases-file exploration/vosk/fixed_phrases/phrases.txt \
--out-dir exploration/vosk/fixed_phrases \
--samples-per-phrase 10
```
This internal command prompts for each allowed phrase and records labeled WAV
samples with manual start/stop (press Enter to start, Enter again to stop). It does
not run Vosk decoding and does not execute desktop commands. Output includes:
- `exploration/vosk/fixed_phrases/samples/`
- `exploration/vosk/fixed_phrases/manifest.jsonl`
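The manifest is one JSON object per line, mirroring the fields the collector writes (`session_id`, `phrase`, `phrase_slug`, `sample_index`, `wav_path`, `samplerate`, `channels`, `duration_ms`, …). A minimal sketch of consuming it, with hypothetical sample values:

```python
import json
from collections import Counter
from pathlib import Path
from tempfile import TemporaryDirectory

# Hypothetical rows shaped like the collector's manifest.jsonl output.
rows = [
    {"session_id": "session-20260228T170000Z", "phrase": "control delta",
     "phrase_slug": "control_delta", "sample_index": i,
     "wav_path": f"samples/control_delta/session-20260228T170000Z__{i:03d}.wav",
     "samplerate": 16000, "channels": 1, "duration_ms": 900}
    for i in range(1, 3)
]

with TemporaryDirectory() as tmp:
    manifest = Path(tmp) / "manifest.jsonl"
    manifest.write_text("".join(json.dumps(r) + "\n" for r in rows), encoding="utf-8")

    # Count collected samples per phrase as a sanity check before an eval run.
    counts = Counter(
        json.loads(line)["phrase"]
        for line in manifest.read_text(encoding="utf-8").splitlines()
        if line.strip()
    )
    print(counts["control delta"])  # 2
```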
Internal Vosk exploration (keystroke dictation: literal vs NATO):
```bash
# collect literal-key dataset
aman collect-fixed-phrases \
--phrases-file exploration/vosk/keystrokes/literal/phrases.txt \
--out-dir exploration/vosk/keystrokes/literal \
--samples-per-phrase 10
# collect NATO-key dataset
aman collect-fixed-phrases \
--phrases-file exploration/vosk/keystrokes/nato/phrases.txt \
--out-dir exploration/vosk/keystrokes/nato \
--samples-per-phrase 10
# evaluate both grammars across available Vosk models
aman eval-vosk-keystrokes \
--literal-manifest exploration/vosk/keystrokes/literal/manifest.jsonl \
--nato-manifest exploration/vosk/keystrokes/nato/manifest.jsonl \
--intents exploration/vosk/keystrokes/intents.json \
--output-dir exploration/vosk/keystrokes/eval_runs \
--models-file exploration/vosk/keystrokes/models.example.json
```
`eval-vosk-keystrokes` writes a structured report (`summary.json`) with:
- intent accuracy and unknown-rate by grammar
- per-intent/per-letter confusion tables
- latency (avg/p50/p95), RTF, and model-load time
- strict grammar compliance checks (out-of-grammar hypotheses hard-fail the model run)
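A minimal sketch of pulling the headline numbers out of such a report; the field names follow the summary structure above, while the values here are stand-ins:

```python
import json
from pathlib import Path
from tempfile import TemporaryDirectory

# Stand-in report shaped like the eval-vosk-keystrokes summary.json.
summary = {
    "winners": {
        "nato": {"name": "vosk-small-en-us-0.15",
                 "intent_accuracy": 1.0, "latency_p50_ms": 26.4},
    },
    "models": [
        {"name": "vosk-small-en-us-0.15",
         "literal": {"intent_accuracy": 0.711, "latency_ms": {"p50": 26.1}},
         "nato": {"intent_accuracy": 1.0, "latency_ms": {"p50": 26.4}}},
    ],
}

with TemporaryDirectory() as tmp:
    path = Path(tmp) / "summary.json"
    path.write_text(json.dumps(summary, indent=2), encoding="utf-8")

    # Read back and report the per-grammar winner.
    report = json.loads(path.read_text(encoding="utf-8"))
    best = report["winners"]["nato"]
    print(f"{best['name']} acc={best['intent_accuracy']:.3f}")
```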
Internal Vosk exploration (single NATO words):
```bash
aman collect-fixed-phrases \
--phrases-file exploration/vosk/nato_words/phrases.txt \
--out-dir exploration/vosk/nato_words \
--samples-per-phrase 10
```
This prepares a labeled dataset for per-word NATO recognition (26 words, one
word per prompt). Output includes:
- `exploration/vosk/nato_words/samples/`
- `exploration/vosk/nato_words/manifest.jsonl`
Model evaluation lab (dataset + matrix sweep):
```bash
@@ -344,6 +403,9 @@ aman run --config ~/.config/aman/config.json
aman doctor --config ~/.config/aman/config.json --json
aman self-check --config ~/.config/aman/config.json --json
aman bench --text "example transcript" --repeat 5 --warmup 1
aman collect-fixed-phrases --phrases-file exploration/vosk/fixed_phrases/phrases.txt --out-dir exploration/vosk/fixed_phrases --samples-per-phrase 10
aman collect-fixed-phrases --phrases-file exploration/vosk/nato_words/phrases.txt --out-dir exploration/vosk/nato_words --samples-per-phrase 10
aman eval-vosk-keystrokes --literal-manifest exploration/vosk/keystrokes/literal/manifest.jsonl --nato-manifest exploration/vosk/keystrokes/nato/manifest.jsonl --intents exploration/vosk/keystrokes/intents.json --output-dir exploration/vosk/keystrokes/eval_runs --json
aman build-heuristic-dataset --input benchmarks/heuristics_dataset.raw.jsonl --output benchmarks/heuristics_dataset.jsonl --json
aman eval-models --dataset benchmarks/cleanup_dataset.jsonl --matrix benchmarks/model_matrix.small_first.json --heuristic-dataset benchmarks/heuristics_dataset.jsonl --heuristic-weight 0.25 --json
aman sync-default-model --check --report benchmarks/results/latest.json --artifacts benchmarks/model_artifacts.json --constants src/constants.py


@@ -0,0 +1,5 @@
literal/manifest.jsonl
literal/samples/
nato/manifest.jsonl
nato/samples/
eval_runs/


@@ -0,0 +1,31 @@
# Vosk Keystroke Grammar Findings
- Date (UTC): 2026-02-28
- Run ID: `run-20260228T200047Z`
- Dataset size:
- Literal grammar: 90 samples
- NATO grammar: 90 samples
- Intents: 9 (`ctrl|shift|ctrl+shift` x `d|b|p`)
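The nine intents are the cross product of the modifiers and letters; a quick check, using the `intent_id` format from `intents.json`:

```python
from itertools import product

modifiers = ["ctrl", "shift", "ctrl+shift"]
letters = ["d", "b", "p"]

# intent_id format matches intents.json, e.g. "ctrl+shift+p".
intent_ids = [f"{mod}+{letter}" for mod, letter in product(modifiers, letters)]
print(len(intent_ids), intent_ids[0], intent_ids[-1])  # 9 ctrl+d ctrl+shift+p
```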
## Results
| Model | Literal intent accuracy | NATO intent accuracy | Literal p50 | NATO p50 |
|---|---:|---:|---:|---:|
| `vosk-small-en-us-0.15` | 71.11% | 100.00% | 26.07 ms | 26.38 ms |
| `vosk-en-us-0.22-lgraph` | 74.44% | 100.00% | 210.34 ms | 214.97 ms |
## Main Error Pattern (Literal Grammar)
- Letter confusion is concentrated on `p -> b`:
- `control p -> control b`
- `shift p -> shift b`
- `control shift p -> control shift b`
## Takeaways
- The NATO grammar is clearly the better choice for this keystroke use case (100% intent accuracy on both tested models).
- `vosk-small-en-us-0.15` is the practical default for command-keystroke experiments: it matches the larger model's NATO accuracy while decoding roughly 8x faster.
## Raw Report
- `exploration/vosk/keystrokes/eval_runs/run-20260228T200047Z/summary.json`


@@ -0,0 +1,65 @@
[
{
"intent_id": "ctrl+d",
"literal_phrase": "control d",
"nato_phrase": "control delta",
"letter": "d",
"modifier": "ctrl"
},
{
"intent_id": "ctrl+b",
"literal_phrase": "control b",
"nato_phrase": "control bravo",
"letter": "b",
"modifier": "ctrl"
},
{
"intent_id": "ctrl+p",
"literal_phrase": "control p",
"nato_phrase": "control papa",
"letter": "p",
"modifier": "ctrl"
},
{
"intent_id": "shift+d",
"literal_phrase": "shift d",
"nato_phrase": "shift delta",
"letter": "d",
"modifier": "shift"
},
{
"intent_id": "shift+b",
"literal_phrase": "shift b",
"nato_phrase": "shift bravo",
"letter": "b",
"modifier": "shift"
},
{
"intent_id": "shift+p",
"literal_phrase": "shift p",
"nato_phrase": "shift papa",
"letter": "p",
"modifier": "shift"
},
{
"intent_id": "ctrl+shift+d",
"literal_phrase": "control shift d",
"nato_phrase": "control shift delta",
"letter": "d",
"modifier": "ctrl+shift"
},
{
"intent_id": "ctrl+shift+b",
"literal_phrase": "control shift b",
"nato_phrase": "control shift bravo",
"letter": "b",
"modifier": "ctrl+shift"
},
{
"intent_id": "ctrl+shift+p",
"literal_phrase": "control shift p",
"nato_phrase": "control shift papa",
"letter": "p",
"modifier": "ctrl+shift"
}
]


@@ -0,0 +1,11 @@
# Keystroke literal grammar labels.
# One phrase per line.
control d
control b
control p
shift d
shift b
shift p
control shift d
control shift b
control shift p


@@ -0,0 +1,10 @@
[
{
"name": "vosk-small-en-us-0.15",
"path": "/tmp/vosk-models/vosk-model-small-en-us-0.15"
},
{
"name": "vosk-en-us-0.22-lgraph",
"path": "/tmp/vosk-models/vosk-model-en-us-0.22-lgraph"
}
]
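Model specs are a plain JSON array of `{name, path}` objects. A sketch of filtering to models present on disk, mirroring how the evaluator skips missing paths with a recorded reason (the paths here are the example values above and may not exist locally):

```python
import json
from pathlib import Path

specs_json = """[
  {"name": "vosk-small-en-us-0.15", "path": "/tmp/vosk-models/vosk-model-small-en-us-0.15"},
  {"name": "vosk-en-us-0.22-lgraph", "path": "/tmp/vosk-models/vosk-model-en-us-0.22-lgraph"}
]"""

specs = json.loads(specs_json)
# Partition into models we can evaluate and models to report as skipped.
available = [s for s in specs if Path(s["path"]).exists()]
skipped = [{"name": s["name"], "reason": "model path does not exist"}
           for s in specs if not Path(s["path"]).exists()]
print(len(available) + len(skipped))  # 2
```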


@@ -0,0 +1,11 @@
# Keystroke NATO grammar labels.
# One phrase per line.
control delta
control bravo
control papa
shift delta
shift bravo
shift papa
control shift delta
control shift bravo
control shift papa


@@ -0,0 +1,3 @@
manifest.jsonl
samples/
eval_runs/


@@ -0,0 +1,28 @@
# NATO alphabet single-word grammar labels.
# One phrase per line.
alpha
bravo
charlie
delta
echo
foxtrot
golf
hotel
india
juliett
kilo
lima
mike
november
oscar
papa
quebec
romeo
sierra
tango
uniform
victor
whiskey
x-ray
yankee
zulu


@@ -14,6 +14,7 @@ dependencies = [
"numpy",
"pillow",
"sounddevice",
"vosk>=0.3.45",
]
[project.scripts]
@@ -44,6 +45,8 @@ py-modules = [
"model_eval",
"recorder",
"vocabulary",
"vosk_collect",
"vosk_eval",
]
[tool.setuptools.data-files]


@@ -36,6 +36,22 @@ from recorder import stop_recording as stop_audio_recording
from stages.asr_whisper import AsrResult, WhisperAsrStage
from stages.editor_llama import LlamaEditorStage
from vocabulary import VocabularyEngine
from vosk_collect import (
DEFAULT_CHANNELS,
DEFAULT_FIXED_PHRASES_OUT_DIR,
DEFAULT_FIXED_PHRASES_PATH,
DEFAULT_SAMPLE_RATE,
DEFAULT_SAMPLES_PER_PHRASE,
CollectOptions,
collect_fixed_phrases,
)
from vosk_eval import (
DEFAULT_KEYSTROKE_EVAL_OUTPUT_DIR,
DEFAULT_KEYSTROKE_INTENTS_PATH,
DEFAULT_KEYSTROKE_LITERAL_MANIFEST_PATH,
DEFAULT_KEYSTROKE_NATO_MANIFEST_PATH,
run_vosk_keystroke_eval,
)
class State:
@@ -981,6 +997,88 @@ def _build_parser() -> argparse.ArgumentParser:
)
bench_parser.add_argument("-v", "--verbose", action="store_true", help="enable verbose logs")
collect_parser = subparsers.add_parser(
"collect-fixed-phrases",
help="internal: collect labeled fixed-phrase wav samples for command-stt exploration",
)
collect_parser.add_argument(
"--phrases-file",
default=str(DEFAULT_FIXED_PHRASES_PATH),
help="path to fixed-phrase labels file (one phrase per line)",
)
collect_parser.add_argument(
"--out-dir",
default=str(DEFAULT_FIXED_PHRASES_OUT_DIR),
help="output directory for samples/ and manifest.jsonl",
)
collect_parser.add_argument(
"--samples-per-phrase",
type=int,
default=DEFAULT_SAMPLES_PER_PHRASE,
help="number of recordings to capture per phrase",
)
collect_parser.add_argument(
"--samplerate",
type=int,
default=DEFAULT_SAMPLE_RATE,
help="sample rate for captured wav files",
)
collect_parser.add_argument(
"--channels",
type=int,
default=DEFAULT_CHANNELS,
help="number of input channels to capture",
)
collect_parser.add_argument(
"--device",
default="",
help="optional recording device index or name substring",
)
collect_parser.add_argument(
"--session-id",
default="",
help="optional session id; autogenerated when omitted",
)
collect_parser.add_argument(
"--overwrite-session",
action="store_true",
help="allow writing samples for an existing session id",
)
collect_parser.add_argument("--json", action="store_true", help="print JSON summary output")
collect_parser.add_argument("-v", "--verbose", action="store_true", help="enable verbose logs")
keystroke_eval_parser = subparsers.add_parser(
"eval-vosk-keystrokes",
help="internal: evaluate keystroke dictation datasets with literal and nato grammars",
)
keystroke_eval_parser.add_argument(
"--literal-manifest",
default=str(DEFAULT_KEYSTROKE_LITERAL_MANIFEST_PATH),
help="path to literal keystroke manifest.jsonl",
)
keystroke_eval_parser.add_argument(
"--nato-manifest",
default=str(DEFAULT_KEYSTROKE_NATO_MANIFEST_PATH),
help="path to nato keystroke manifest.jsonl",
)
keystroke_eval_parser.add_argument(
"--intents",
default=str(DEFAULT_KEYSTROKE_INTENTS_PATH),
help="path to keystroke intents definition json",
)
keystroke_eval_parser.add_argument(
"--output-dir",
default=str(DEFAULT_KEYSTROKE_EVAL_OUTPUT_DIR),
help="directory for run reports",
)
keystroke_eval_parser.add_argument(
"--models-file",
default="",
help="optional json array of model specs [{name,path}]",
)
keystroke_eval_parser.add_argument("--json", action="store_true", help="print JSON summary output")
keystroke_eval_parser.add_argument("-v", "--verbose", action="store_true", help="enable verbose logs")
eval_parser = subparsers.add_parser(
"eval-models",
help="evaluate model/parameter matrices against expected outputs",
@@ -1059,6 +1157,8 @@ def _parse_cli_args(argv: list[str]) -> argparse.Namespace:
"doctor",
"self-check",
"bench",
"collect-fixed-phrases",
"eval-vosk-keystrokes",
"eval-models",
"build-heuristic-dataset",
"sync-default-model",
@@ -1255,6 +1355,120 @@ def _bench_command(args: argparse.Namespace) -> int:
return 0
def _collect_fixed_phrases_command(args: argparse.Namespace) -> int:
if args.samples_per_phrase < 1:
logging.error("collect-fixed-phrases failed: --samples-per-phrase must be >= 1")
return 1
if args.samplerate < 1:
logging.error("collect-fixed-phrases failed: --samplerate must be >= 1")
return 1
if args.channels < 1:
logging.error("collect-fixed-phrases failed: --channels must be >= 1")
return 1
options = CollectOptions(
phrases_file=Path(args.phrases_file),
out_dir=Path(args.out_dir),
samples_per_phrase=args.samples_per_phrase,
samplerate=args.samplerate,
channels=args.channels,
device_spec=(args.device.strip() if args.device.strip() else None),
session_id=(args.session_id.strip() if args.session_id.strip() else None),
overwrite_session=bool(args.overwrite_session),
)
try:
result = collect_fixed_phrases(options)
except Exception as exc:
logging.error("collect-fixed-phrases failed: %s", exc)
return 1
summary = {
"session_id": result.session_id,
"phrases": result.phrases,
"samples_per_phrase": result.samples_per_phrase,
"samples_target": result.samples_target,
"samples_written": result.samples_written,
"out_dir": str(result.out_dir),
"manifest_path": str(result.manifest_path),
"interrupted": result.interrupted,
}
if args.json:
print(json.dumps(summary, indent=2, ensure_ascii=False))
else:
print(
"collect-fixed-phrases summary: "
f"session={result.session_id} "
f"phrases={result.phrases} "
f"samples_per_phrase={result.samples_per_phrase} "
f"written={result.samples_written}/{result.samples_target} "
f"interrupted={result.interrupted} "
f"manifest={result.manifest_path}"
)
return 0
def _eval_vosk_keystrokes_command(args: argparse.Namespace) -> int:
try:
summary = run_vosk_keystroke_eval(
literal_manifest=args.literal_manifest,
nato_manifest=args.nato_manifest,
intents_path=args.intents,
output_dir=args.output_dir,
models_file=(args.models_file.strip() or None),
verbose=args.verbose,
)
except Exception as exc:
logging.error("eval-vosk-keystrokes failed: %s", exc)
return 1
if args.json:
print(json.dumps(summary, indent=2, ensure_ascii=False))
return 0
print(
"eval-vosk-keystrokes summary: "
f"models={len(summary.get('models', []))} "
f"output_dir={summary.get('output_dir', '')}"
)
winners = summary.get("winners", {})
literal_winner = winners.get("literal", {})
nato_winner = winners.get("nato", {})
overall_winner = winners.get("overall", {})
if literal_winner:
print(
"winner[literal]: "
f"{literal_winner.get('name', '')} "
f"acc={float(literal_winner.get('intent_accuracy', 0.0)):.3f} "
f"p50={float(literal_winner.get('latency_p50_ms', 0.0)):.1f}ms"
)
if nato_winner:
print(
"winner[nato]: "
f"{nato_winner.get('name', '')} "
f"acc={float(nato_winner.get('intent_accuracy', 0.0)):.3f} "
f"p50={float(nato_winner.get('latency_p50_ms', 0.0)):.1f}ms"
)
if overall_winner:
print(
"winner[overall]: "
f"{overall_winner.get('name', '')} "
f"acc={float(overall_winner.get('avg_intent_accuracy', 0.0)):.3f} "
f"p50={float(overall_winner.get('avg_latency_p50_ms', 0.0)):.1f}ms"
)
for model in summary.get("models", []):
literal = model.get("literal", {})
nato = model.get("nato", {})
print(
f"{model.get('name', '')}: "
f"literal_acc={float(literal.get('intent_accuracy', 0.0)):.3f} "
f"literal_p50={float(literal.get('latency_ms', {}).get('p50', 0.0)):.1f}ms "
f"nato_acc={float(nato.get('intent_accuracy', 0.0)):.3f} "
f"nato_p50={float(nato.get('latency_ms', {}).get('p50', 0.0)):.1f}ms"
)
return 0
def _eval_models_command(args: argparse.Namespace) -> int:
try:
report = run_model_eval(
@@ -1597,6 +1811,12 @@ def main(argv: list[str] | None = None) -> int:
if args.command == "bench":
_configure_logging(args.verbose)
return _bench_command(args)
if args.command == "collect-fixed-phrases":
_configure_logging(args.verbose)
return _collect_fixed_phrases_command(args)
if args.command == "eval-vosk-keystrokes":
_configure_logging(args.verbose)
return _eval_vosk_keystrokes_command(args)
if args.command == "eval-models":
_configure_logging(args.verbose)
return _eval_models_command(args)


@@ -33,7 +33,7 @@ class AlignmentResult:
class AlignmentHeuristicEngine:
def apply(self, transcript: str, words: list[AsrWord]) -> AlignmentResult:
base_text = (transcript or "").strip()
if not base_text or not words:
if not base_text:
return AlignmentResult(
draft_text=base_text,
decisions=[],
@@ -41,17 +41,26 @@
skipped_count=0,
)
normalized_words = [_normalize_token(word.text) for word in words]
working_words = list(words) if words else _fallback_words_from_transcript(base_text)
if not working_words:
return AlignmentResult(
draft_text=base_text,
decisions=[],
applied_count=0,
skipped_count=0,
)
normalized_words = [_normalize_token(word.text) for word in working_words]
literal_guard = _has_literal_guard(base_text)
out_tokens: list[str] = []
decisions: list[AlignmentDecision] = []
i = 0
while i < len(words):
cue = _match_cue(words, normalized_words, i)
while i < len(working_words):
cue = _match_cue(working_words, normalized_words, i)
if cue is not None and out_tokens:
cue_len, cue_label = cue
correction_start = i + cue_len
correction_end = _capture_phrase_end(words, correction_start)
correction_end = _capture_phrase_end(working_words, correction_start)
if correction_end <= correction_start:
decisions.append(
AlignmentDecision(
@@ -65,7 +74,7 @@
)
i += cue_len
continue
correction_tokens = _slice_clean_words(words, correction_start, correction_end)
correction_tokens = _slice_clean_words(working_words, correction_start, correction_end)
if not correction_tokens:
i = correction_end
continue
@@ -113,7 +122,7 @@
i = correction_end
continue
token = _strip_token(words[i].text)
token = _strip_token(working_words[i].text)
if token:
out_tokens.append(token)
i += 1
@@ -296,3 +305,23 @@ def _has_literal_guard(text: str) -> bool:
"quote",
)
return any(guard in normalized for guard in guards)
def _fallback_words_from_transcript(text: str) -> list[AsrWord]:
tokens = [item for item in (text or "").split() if item.strip()]
if not tokens:
return []
words: list[AsrWord] = []
start = 0.0
step = 0.15
for token in tokens:
words.append(
AsrWord(
text=token,
start_s=start,
end_s=start + 0.1,
prob=None,
)
)
start += step
return words

src/vosk_collect.py (new file, 329 lines)

@@ -0,0 +1,329 @@
from __future__ import annotations
import json
import re
import wave
from dataclasses import dataclass
from datetime import datetime, timezone
from pathlib import Path
from typing import Callable
import numpy as np
from recorder import list_input_devices, resolve_input_device
DEFAULT_FIXED_PHRASES_PATH = Path("exploration/vosk/fixed_phrases/phrases.txt")
DEFAULT_FIXED_PHRASES_OUT_DIR = Path("exploration/vosk/fixed_phrases")
DEFAULT_SAMPLES_PER_PHRASE = 10
DEFAULT_SAMPLE_RATE = 16000
DEFAULT_CHANNELS = 1
COLLECTOR_VERSION = "fixed-phrases-v1"
@dataclass
class CollectOptions:
phrases_file: Path = DEFAULT_FIXED_PHRASES_PATH
out_dir: Path = DEFAULT_FIXED_PHRASES_OUT_DIR
samples_per_phrase: int = DEFAULT_SAMPLES_PER_PHRASE
samplerate: int = DEFAULT_SAMPLE_RATE
channels: int = DEFAULT_CHANNELS
device_spec: str | int | None = None
session_id: str | None = None
overwrite_session: bool = False
@dataclass
class CollectResult:
session_id: str
phrases: int
samples_per_phrase: int
samples_target: int
samples_written: int
out_dir: Path
manifest_path: Path
interrupted: bool
def load_phrases(path: Path | str) -> list[str]:
phrases_path = Path(path)
if not phrases_path.exists():
raise RuntimeError(f"phrases file does not exist: {phrases_path}")
rows = phrases_path.read_text(encoding="utf-8").splitlines()
phrases: list[str] = []
seen: set[str] = set()
for raw in rows:
text = raw.strip()
if not text or text.startswith("#"):
continue
if text in seen:
continue
seen.add(text)
phrases.append(text)
if not phrases:
raise RuntimeError(f"phrases file has no usable labels: {phrases_path}")
return phrases
def slugify_phrase(value: str) -> str:
slug = re.sub(r"[^a-z0-9]+", "_", value.casefold()).strip("_")
if not slug:
return "phrase"
return slug[:64]
def float_to_pcm16(audio: np.ndarray) -> np.ndarray:
if audio.size <= 0:
return np.zeros((0,), dtype=np.int16)
clipped = np.clip(np.asarray(audio, dtype=np.float32), -1.0, 1.0)
return np.rint(clipped * 32767.0).astype(np.int16)
def collect_fixed_phrases(
options: CollectOptions,
*,
input_func: Callable[[str], str] = input,
output_func: Callable[[str], None] = print,
record_sample_fn: Callable[[CollectOptions, Callable[[str], str]], tuple[np.ndarray, int, int]]
| None = None,
) -> CollectResult:
_validate_options(options)
phrases = load_phrases(options.phrases_file)
slug_map = _build_slug_map(phrases)
session_id = _resolve_session_id(options.session_id)
out_dir = options.out_dir.expanduser().resolve()
samples_root = out_dir / "samples"
manifest_path = out_dir / "manifest.jsonl"
if not options.overwrite_session and _session_has_samples(samples_root, session_id):
raise RuntimeError(
f"session '{session_id}' already has samples in {samples_root}; use --overwrite-session"
)
out_dir.mkdir(parents=True, exist_ok=True)
recorder = record_sample_fn or _record_sample_manual_stop
target = len(phrases) * options.samples_per_phrase
written = 0
output_func(
"collecting fixed-phrase samples: "
f"session={session_id} phrases={len(phrases)} samples_per_phrase={options.samples_per_phrase}"
)
for phrase in phrases:
slug = slug_map[phrase]
phrase_dir = samples_root / slug
phrase_dir.mkdir(parents=True, exist_ok=True)
output_func(f'phrase: "{phrase}"')
sample_index = 1
while sample_index <= options.samples_per_phrase:
choice = input_func(
f"sample {sample_index}/{options.samples_per_phrase} - press Enter to start "
"(or 'q' to stop this session): "
).strip()
if choice.casefold() in {"q", "quit", "exit"}:
output_func("collection interrupted by user")
return CollectResult(
session_id=session_id,
phrases=len(phrases),
samples_per_phrase=options.samples_per_phrase,
samples_target=target,
samples_written=written,
out_dir=out_dir,
manifest_path=manifest_path,
interrupted=True,
)
audio, frame_count, duration_ms = recorder(options, input_func)
if frame_count <= 0:
output_func("captured empty sample; retrying the same index")
continue
wav_path = phrase_dir / f"{session_id}__{sample_index:03d}.wav"
_write_wav_file(wav_path, audio, samplerate=options.samplerate, channels=options.channels)
row = {
"session_id": session_id,
"timestamp_utc": _utc_now_iso(),
"phrase": phrase,
"phrase_slug": slug,
"sample_index": sample_index,
"wav_path": _path_for_manifest(wav_path),
"samplerate": options.samplerate,
"channels": options.channels,
"duration_ms": duration_ms,
"frames": frame_count,
"device_spec": options.device_spec,
"collector_version": COLLECTOR_VERSION,
}
_append_manifest_row(manifest_path, row)
written += 1
output_func(
f"saved sample {written}/{target}: {row['wav_path']} "
f"(duration_ms={duration_ms}, frames={frame_count})"
)
sample_index += 1
return CollectResult(
session_id=session_id,
phrases=len(phrases),
samples_per_phrase=options.samples_per_phrase,
samples_target=target,
samples_written=written,
out_dir=out_dir,
manifest_path=manifest_path,
interrupted=False,
)
def _record_sample_manual_stop(
options: CollectOptions,
input_func: Callable[[str], str],
) -> tuple[np.ndarray, int, int]:
sd = _sounddevice()
frames: list[np.ndarray] = []
device = _resolve_device_or_raise(options.device_spec)
def callback(indata, _frames, _time, _status):
frames.append(indata.copy())
stream = sd.InputStream(
samplerate=options.samplerate,
channels=options.channels,
dtype="float32",
device=device,
callback=callback,
)
stream.start()
try:
input_func("recording... press Enter to stop: ")
finally:
stop_error = None
try:
stream.stop()
except Exception as exc: # pragma: no cover - exercised via recorder tests, hard to force here
stop_error = exc
try:
stream.close()
except Exception as exc: # pragma: no cover - exercised via recorder tests, hard to force here
if stop_error is None:
raise
raise RuntimeError(f"recording stop failed ({stop_error}) and close also failed ({exc})") from exc
if stop_error is not None:
raise stop_error
audio = _flatten_frames(frames, channels=options.channels)
frame_count = int(audio.shape[0])
duration_ms = int(round((frame_count / float(options.samplerate)) * 1000.0))
return audio, frame_count, duration_ms
def _validate_options(options: CollectOptions) -> None:
if options.samples_per_phrase < 1:
raise RuntimeError("samples_per_phrase must be >= 1")
if options.samplerate < 1:
raise RuntimeError("samplerate must be >= 1")
if options.channels < 1:
raise RuntimeError("channels must be >= 1")
def _resolve_session_id(value: str | None) -> str:
text = (value or "").strip()
if text:
return text
return datetime.now(timezone.utc).strftime("session-%Y%m%dT%H%M%SZ")
def _build_slug_map(phrases: list[str]) -> dict[str, str]:
out: dict[str, str] = {}
used: dict[str, str] = {}
for phrase in phrases:
slug = slugify_phrase(phrase)
previous = used.get(slug)
if previous is not None and previous != phrase:
raise RuntimeError(
f'phrases "{previous}" and "{phrase}" map to the same slug "{slug}"'
)
used[slug] = phrase
out[phrase] = slug
return out
def _session_has_samples(samples_root: Path, session_id: str) -> bool:
if not samples_root.exists():
return False
pattern = f"{session_id}__*.wav"
return any(samples_root.rglob(pattern))
def _flatten_frames(frames: list[np.ndarray], *, channels: int) -> np.ndarray:
if not frames:
return np.zeros((0, channels), dtype=np.float32)
data = np.concatenate(frames, axis=0)
if data.ndim == 1:
data = data.reshape(-1, 1)
if data.ndim != 2:
raise RuntimeError(f"unexpected recorded frame shape: {data.shape}")
return np.asarray(data, dtype=np.float32)
def _write_wav_file(path: Path, audio: np.ndarray, *, samplerate: int, channels: int) -> None:
path.parent.mkdir(parents=True, exist_ok=True)
pcm = float_to_pcm16(audio)
with wave.open(str(path), "wb") as handle:
handle.setnchannels(channels)
handle.setsampwidth(2)
handle.setframerate(samplerate)
handle.writeframes(pcm.tobytes())
def _append_manifest_row(manifest_path: Path, row: dict[str, object]) -> None:
manifest_path.parent.mkdir(parents=True, exist_ok=True)
with manifest_path.open("a", encoding="utf-8") as handle:
handle.write(f"{json.dumps(row, ensure_ascii=False)}\n")
handle.flush()
def _path_for_manifest(path: Path) -> str:
try:
rel = path.resolve().relative_to(Path.cwd().resolve())
return rel.as_posix()
except Exception:
return path.as_posix()
def _utc_now_iso() -> str:
return datetime.now(timezone.utc).isoformat(timespec="milliseconds").replace("+00:00", "Z")
def _resolve_device_or_raise(spec: str | int | None) -> int | None:
device = resolve_input_device(spec)
if not _is_explicit_device_spec(spec):
return device
if device is not None:
return device
raise RuntimeError(
f"input device '{spec}' did not match any input device; available: {_available_inputs_summary()}"
)
def _is_explicit_device_spec(spec: str | int | None) -> bool:
if spec is None:
return False
if isinstance(spec, int):
return True
return bool(str(spec).strip())
def _available_inputs_summary(limit: int = 8) -> str:
devices = list_input_devices()
if not devices:
return "<none>"
items = [f"{d['index']}:{d['name']}" for d in devices[:limit]]
if len(devices) > limit:
items.append("...")
return ", ".join(items)
def _sounddevice():
try:
import sounddevice as sd # type: ignore[import-not-found]
except ModuleNotFoundError as exc:
raise RuntimeError(
"sounddevice is not installed; install dependencies with `uv sync --extra x11`"
) from exc
return sd

src/vosk_eval.py (new file, 670 lines)

@@ -0,0 +1,670 @@
from __future__ import annotations
import json
import statistics
import time
import wave
from dataclasses import dataclass
from datetime import datetime, timezone
from pathlib import Path
from typing import Any, Callable, Iterable
DEFAULT_KEYSTROKE_INTENTS_PATH = Path("exploration/vosk/keystrokes/intents.json")
DEFAULT_KEYSTROKE_LITERAL_MANIFEST_PATH = Path("exploration/vosk/keystrokes/literal/manifest.jsonl")
DEFAULT_KEYSTROKE_NATO_MANIFEST_PATH = Path("exploration/vosk/keystrokes/nato/manifest.jsonl")
DEFAULT_KEYSTROKE_EVAL_OUTPUT_DIR = Path("exploration/vosk/keystrokes/eval_runs")
DEFAULT_KEYSTROKE_MODELS = [
{
"name": "vosk-small-en-us-0.15",
"path": "/tmp/vosk-models/vosk-model-small-en-us-0.15",
},
{
"name": "vosk-en-us-0.22-lgraph",
"path": "/tmp/vosk-models/vosk-model-en-us-0.22-lgraph",
},
]
@dataclass(frozen=True)
class IntentSpec:
intent_id: str
literal_phrase: str
nato_phrase: str
letter: str
modifier: str
@dataclass(frozen=True)
class ModelSpec:
name: str
path: Path
@dataclass(frozen=True)
class ManifestSample:
wav_path: Path
expected_phrase: str
expected_intent: str
expected_letter: str
expected_modifier: str
@dataclass(frozen=True)
class DecodedRow:
wav_path: str
expected_phrase: str
hypothesis: str
expected_intent: str
predicted_intent: str | None
expected_letter: str
predicted_letter: str | None
expected_modifier: str
predicted_modifier: str | None
intent_match: bool
audio_ms: float
decode_ms: float
rtf: float | None
out_of_grammar: bool
def run_vosk_keystroke_eval(
*,
literal_manifest: str | Path,
nato_manifest: str | Path,
intents_path: str | Path,
output_dir: str | Path,
models_file: str | Path | None = None,
verbose: bool = False,
) -> dict[str, Any]:
intents = load_keystroke_intents(intents_path)
literal_index = build_phrase_to_intent_index(intents, grammar="literal")
nato_index = build_phrase_to_intent_index(intents, grammar="nato")
literal_samples = load_manifest_samples(literal_manifest, literal_index)
nato_samples = load_manifest_samples(nato_manifest, nato_index)
model_specs = load_model_specs(models_file)
if not model_specs:
raise RuntimeError("no model specs provided")
run_id = datetime.now(timezone.utc).strftime("run-%Y%m%dT%H%M%SZ")
base_output_dir = Path(output_dir)
run_output_dir = (base_output_dir / run_id).resolve()
run_output_dir.mkdir(parents=True, exist_ok=True)
summary: dict[str, Any] = {
"report_version": 1,
"run_id": run_id,
"literal_manifest": str(Path(literal_manifest)),
"nato_manifest": str(Path(nato_manifest)),
"intents_path": str(Path(intents_path)),
"models_file": str(models_file) if models_file else "",
"models": [],
"skipped_models": [],
"winners": {},
"cross_grammar_delta": [],
"output_dir": str(run_output_dir),
}
for model in model_specs:
if not model.path.exists():
summary["skipped_models"].append(
{
"name": model.name,
"path": str(model.path),
"reason": "model path does not exist",
}
)
continue
model_report = _evaluate_model(
model,
literal_samples=literal_samples,
nato_samples=nato_samples,
literal_index=literal_index,
nato_index=nato_index,
output_dir=run_output_dir,
verbose=verbose,
)
summary["models"].append(model_report)
if not summary["models"]:
raise RuntimeError("no models were successfully evaluated")
summary["winners"] = _pick_winners(summary["models"])
summary["cross_grammar_delta"] = _cross_grammar_delta(summary["models"])
summary_path = run_output_dir / "summary.json"
summary["summary_path"] = str(summary_path)
summary_path.write_text(f"{json.dumps(summary, indent=2, ensure_ascii=False)}\n", encoding="utf-8")
return summary
def load_keystroke_intents(path: str | Path) -> list[IntentSpec]:
payload = _load_json(path, description="intents")
if not isinstance(payload, list):
raise RuntimeError("intents file must be a JSON array")
intents: list[IntentSpec] = []
seen_ids: set[str] = set()
seen_literal: set[str] = set()
seen_nato: set[str] = set()
for idx, item in enumerate(payload):
if not isinstance(item, dict):
raise RuntimeError(f"intents[{idx}] must be an object")
intent_id = str(item.get("intent_id", "")).strip()
literal_phrase = str(item.get("literal_phrase", "")).strip()
nato_phrase = str(item.get("nato_phrase", "")).strip()
letter = str(item.get("letter", "")).strip().casefold()
modifier = str(item.get("modifier", "")).strip().casefold()
if not intent_id:
raise RuntimeError(f"intents[{idx}].intent_id is required")
if not literal_phrase:
raise RuntimeError(f"intents[{idx}].literal_phrase is required")
if not nato_phrase:
raise RuntimeError(f"intents[{idx}].nato_phrase is required")
if letter not in {"d", "b", "p"}:
raise RuntimeError(f"intents[{idx}].letter must be one of d/b/p")
if modifier not in {"ctrl", "shift", "ctrl+shift"}:
raise RuntimeError(f"intents[{idx}].modifier must be ctrl/shift/ctrl+shift")
norm_id = _norm(intent_id)
norm_literal = _norm(literal_phrase)
norm_nato = _norm(nato_phrase)
if norm_id in seen_ids:
raise RuntimeError(f"duplicate intent_id '{intent_id}'")
if norm_literal in seen_literal:
raise RuntimeError(f"duplicate literal_phrase '{literal_phrase}'")
if norm_nato in seen_nato:
raise RuntimeError(f"duplicate nato_phrase '{nato_phrase}'")
seen_ids.add(norm_id)
seen_literal.add(norm_literal)
seen_nato.add(norm_nato)
intents.append(
IntentSpec(
intent_id=intent_id,
literal_phrase=literal_phrase,
nato_phrase=nato_phrase,
letter=letter,
modifier=modifier,
)
)
if not intents:
raise RuntimeError("intents file is empty")
return intents
def build_phrase_to_intent_index(
intents: list[IntentSpec],
*,
grammar: str,
) -> dict[str, IntentSpec]:
if grammar not in {"literal", "nato"}:
raise RuntimeError(f"unsupported grammar type '{grammar}'")
out: dict[str, IntentSpec] = {}
for spec in intents:
phrase = spec.literal_phrase if grammar == "literal" else spec.nato_phrase
key = _norm(phrase)
if key in out:
raise RuntimeError(f"duplicate phrase mapping for grammar {grammar}: '{phrase}'")
out[key] = spec
return out
def load_manifest_samples(
path: str | Path,
phrase_index: dict[str, IntentSpec],
) -> list[ManifestSample]:
manifest_path = Path(path)
if not manifest_path.exists():
raise RuntimeError(f"manifest file does not exist: {manifest_path}")
rows = manifest_path.read_text(encoding="utf-8").splitlines()
samples: list[ManifestSample] = []
for idx, raw in enumerate(rows, start=1):
text = raw.strip()
if not text:
continue
try:
payload = json.loads(text)
except Exception as exc:
raise RuntimeError(f"invalid manifest json at line {idx}: {exc}") from exc
if not isinstance(payload, dict):
raise RuntimeError(f"manifest line {idx} must be an object")
phrase = str(payload.get("phrase", "")).strip()
wav_path_raw = str(payload.get("wav_path", "")).strip()
if not phrase:
raise RuntimeError(f"manifest line {idx} missing phrase")
if not wav_path_raw:
raise RuntimeError(f"manifest line {idx} missing wav_path")
spec = phrase_index.get(_norm(phrase))
if spec is None:
raise RuntimeError(
f"manifest line {idx} phrase '{phrase}' does not exist in grammar index"
)
wav_path = _resolve_manifest_wav_path(
wav_path_raw,
manifest_dir=manifest_path.parent,
)
if not wav_path.exists():
raise RuntimeError(f"manifest line {idx} wav_path does not exist: {wav_path}")
samples.append(
ManifestSample(
wav_path=wav_path,
expected_phrase=phrase,
expected_intent=spec.intent_id,
expected_letter=spec.letter,
expected_modifier=spec.modifier,
)
)
if not samples:
raise RuntimeError(f"manifest has no samples: {manifest_path}")
return samples
def load_model_specs(path: str | Path | None) -> list[ModelSpec]:
if path is None:
return [
ModelSpec(
name=str(row["name"]),
path=Path(str(row["path"])).expanduser().resolve(),
)
for row in DEFAULT_KEYSTROKE_MODELS
]
models_path = Path(path)
payload = _load_json(models_path, description="model specs")
if not isinstance(payload, list):
raise RuntimeError("models file must be a JSON array")
specs: list[ModelSpec] = []
seen: set[str] = set()
for idx, item in enumerate(payload):
if not isinstance(item, dict):
raise RuntimeError(f"models[{idx}] must be an object")
name = str(item.get("name", "")).strip()
path_raw = str(item.get("path", "")).strip()
if not name:
raise RuntimeError(f"models[{idx}].name is required")
if not path_raw:
raise RuntimeError(f"models[{idx}].path is required")
key = _norm(name)
if key in seen:
raise RuntimeError(f"duplicate model name '{name}' in models file")
seen.add(key)
model_path = Path(path_raw).expanduser()
if not model_path.is_absolute():
model_path = (models_path.parent / model_path).resolve()
else:
model_path = model_path.resolve()
specs.append(ModelSpec(name=name, path=model_path))
return specs
def summarize_decoded_rows(rows: list[DecodedRow]) -> dict[str, Any]:
if not rows:
return {
"samples": 0,
"intent_match_count": 0,
"intent_accuracy": 0.0,
"unknown_count": 0,
"unknown_rate": 0.0,
"out_of_grammar_count": 0,
"latency_ms": {"avg": 0.0, "p50": 0.0, "p95": 0.0},
"rtf_avg": 0.0,
"intent_breakdown": {},
"modifier_breakdown": {},
"letter_breakdown": {},
"intent_confusion": {},
"letter_confusion": {},
"top_raw_mismatches": [],
}
sample_count = len(rows)
intent_match_count = sum(1 for row in rows if row.intent_match)
unknown_count = sum(1 for row in rows if row.predicted_intent is None)
out_of_grammar_count = sum(1 for row in rows if row.out_of_grammar)
decode_values = sorted(row.decode_ms for row in rows)
p50 = statistics.median(decode_values)
p95 = decode_values[int(round((len(decode_values) - 1) * 0.95))]
rtf_values = [row.rtf for row in rows if row.rtf is not None]
rtf_avg = float(sum(rtf_values) / len(rtf_values)) if rtf_values else 0.0
intent_breakdown: dict[str, dict[str, float | int]] = {}
modifier_breakdown: dict[str, dict[str, float | int]] = {}
letter_breakdown: dict[str, dict[str, float | int]] = {}
intent_confusion: dict[str, dict[str, int]] = {}
letter_confusion: dict[str, dict[str, int]] = {}
raw_mismatch_counts: dict[tuple[str, str], int] = {}
for row in rows:
_inc_metric_bucket(intent_breakdown, row.expected_intent, row.intent_match)
_inc_metric_bucket(modifier_breakdown, row.expected_modifier, row.intent_match)
_inc_metric_bucket(letter_breakdown, row.expected_letter, row.intent_match)
predicted_intent = row.predicted_intent if row.predicted_intent else "__none__"
predicted_letter = row.predicted_letter if row.predicted_letter else "__none__"
_inc_confusion(intent_confusion, row.expected_intent, predicted_intent)
_inc_confusion(letter_confusion, row.expected_letter, predicted_letter)
if not row.intent_match:
key = (row.expected_phrase, row.hypothesis)
raw_mismatch_counts[key] = raw_mismatch_counts.get(key, 0) + 1
_finalize_metric_buckets(intent_breakdown)
_finalize_metric_buckets(modifier_breakdown)
_finalize_metric_buckets(letter_breakdown)
top_raw_mismatches = [
{
"expected_phrase": expected_phrase,
"hypothesis": hypothesis,
"count": count,
}
for (expected_phrase, hypothesis), count in sorted(
raw_mismatch_counts.items(),
key=lambda item: item[1],
reverse=True,
)[:20]
]
return {
"samples": sample_count,
"intent_match_count": intent_match_count,
"intent_accuracy": intent_match_count / sample_count,
"unknown_count": unknown_count,
"unknown_rate": unknown_count / sample_count,
"out_of_grammar_count": out_of_grammar_count,
"latency_ms": {
"avg": sum(decode_values) / sample_count,
"p50": p50,
"p95": p95,
},
"rtf_avg": rtf_avg,
"intent_breakdown": intent_breakdown,
"modifier_breakdown": modifier_breakdown,
"letter_breakdown": letter_breakdown,
"intent_confusion": intent_confusion,
"letter_confusion": letter_confusion,
"top_raw_mismatches": top_raw_mismatches,
}
def _evaluate_model(
model: ModelSpec,
*,
literal_samples: list[ManifestSample],
nato_samples: list[ManifestSample],
literal_index: dict[str, IntentSpec],
nato_index: dict[str, IntentSpec],
output_dir: Path,
verbose: bool,
) -> dict[str, Any]:
_ModelClass, recognizer_factory = _load_vosk_bindings()
started = time.perf_counter()
vosk_model = _ModelClass(str(model.path))
model_load_ms = (time.perf_counter() - started) * 1000.0
grammar_reports: dict[str, Any] = {}
for grammar, samples, index in (
("literal", literal_samples, literal_index),
("nato", nato_samples, nato_index),
):
phrases = _phrases_for_grammar(index.values(), grammar=grammar)
norm_allowed = {_norm(item) for item in phrases}
decoded: list[DecodedRow] = []
for sample in samples:
hypothesis, audio_ms, decode_ms = _decode_sample_with_grammar(
recognizer_factory,
vosk_model,
sample.wav_path,
phrases,
)
hyp_norm = _norm(hypothesis)
spec = index.get(hyp_norm)
predicted_intent = spec.intent_id if spec is not None else None
predicted_letter = spec.letter if spec is not None else None
predicted_modifier = spec.modifier if spec is not None else None
out_of_grammar = bool(hyp_norm) and hyp_norm not in norm_allowed
decoded.append(
DecodedRow(
wav_path=str(sample.wav_path),
expected_phrase=sample.expected_phrase,
hypothesis=hypothesis,
expected_intent=sample.expected_intent,
predicted_intent=predicted_intent,
expected_letter=sample.expected_letter,
predicted_letter=predicted_letter,
expected_modifier=sample.expected_modifier,
predicted_modifier=predicted_modifier,
intent_match=sample.expected_intent == predicted_intent,
audio_ms=audio_ms,
decode_ms=decode_ms,
rtf=(decode_ms / audio_ms) if audio_ms > 0 else None,
out_of_grammar=out_of_grammar,
)
)
report = summarize_decoded_rows(decoded)
if report["out_of_grammar_count"] > 0:
raise RuntimeError(
f"model '{model.name}' produced {report['out_of_grammar_count']} out-of-grammar "
f"hypotheses for grammar '{grammar}'"
)
sample_path = output_dir / f"{grammar}__{_safe_filename(model.name)}__samples.jsonl"
_write_samples_report(sample_path, decoded)
report["samples_report"] = str(sample_path)
if verbose:
print(
f"vosk-eval[{model.name}][{grammar}]: "
f"acc={report['intent_accuracy']:.3f} "
f"p50={report['latency_ms']['p50']:.1f}ms "
f"p95={report['latency_ms']['p95']:.1f}ms"
)
grammar_reports[grammar] = report
literal_acc = float(grammar_reports["literal"]["intent_accuracy"])
nato_acc = float(grammar_reports["nato"]["intent_accuracy"])
literal_p50 = float(grammar_reports["literal"]["latency_ms"]["p50"])
nato_p50 = float(grammar_reports["nato"]["latency_ms"]["p50"])
overall_accuracy = (literal_acc + nato_acc) / 2.0
overall_latency_p50 = (literal_p50 + nato_p50) / 2.0
return {
"name": model.name,
"path": str(model.path),
"model_load_ms": model_load_ms,
"literal": grammar_reports["literal"],
"nato": grammar_reports["nato"],
"overall": {
"avg_intent_accuracy": overall_accuracy,
"avg_latency_p50_ms": overall_latency_p50,
},
}
def _decode_sample_with_grammar(
recognizer_factory: Callable[[Any, float, str], Any],
vosk_model: Any,
wav_path: Path,
phrases: list[str],
) -> tuple[str, float, float]:
with wave.open(str(wav_path), "rb") as handle:
channels = handle.getnchannels()
sample_width = handle.getsampwidth()
sample_rate = float(handle.getframerate())
frame_count = handle.getnframes()
payload = handle.readframes(frame_count)
if channels != 1 or sample_width != 2:
raise RuntimeError(
f"unsupported wav format for {wav_path}: channels={channels} sample_width={sample_width}"
)
recognizer = recognizer_factory(vosk_model, sample_rate, json.dumps(phrases))
if hasattr(recognizer, "SetWords"):
recognizer.SetWords(False)
started = time.perf_counter()
recognizer.AcceptWaveform(payload)
result = recognizer.FinalResult()
decode_ms = (time.perf_counter() - started) * 1000.0
audio_ms = (frame_count / sample_rate) * 1000.0
try:
text = str(json.loads(result).get("text", "")).strip()
except Exception:
text = ""
return text, audio_ms, decode_ms
def _pick_winners(models: list[dict[str, Any]]) -> dict[str, Any]:
winners: dict[str, Any] = {}
for grammar in ("literal", "nato"):
ranked = sorted(
models,
key=lambda item: (
float(item[grammar]["intent_accuracy"]),
-float(item[grammar]["latency_ms"]["p50"]),
),
reverse=True,
)
best = ranked[0]
winners[grammar] = {
"name": best["name"],
"intent_accuracy": best[grammar]["intent_accuracy"],
"latency_p50_ms": best[grammar]["latency_ms"]["p50"],
}
ranked_overall = sorted(
models,
key=lambda item: (
float(item["overall"]["avg_intent_accuracy"]),
-float(item["overall"]["avg_latency_p50_ms"]),
),
reverse=True,
)
winners["overall"] = {
"name": ranked_overall[0]["name"],
"avg_intent_accuracy": ranked_overall[0]["overall"]["avg_intent_accuracy"],
"avg_latency_p50_ms": ranked_overall[0]["overall"]["avg_latency_p50_ms"],
}
return winners
def _cross_grammar_delta(models: list[dict[str, Any]]) -> list[dict[str, Any]]:
rows: list[dict[str, Any]] = []
for model in models:
literal_acc = float(model["literal"]["intent_accuracy"])
nato_acc = float(model["nato"]["intent_accuracy"])
rows.append(
{
"name": model["name"],
"intent_accuracy_delta_nato_minus_literal": nato_acc - literal_acc,
"literal_intent_accuracy": literal_acc,
"nato_intent_accuracy": nato_acc,
}
)
rows.sort(key=lambda item: item["intent_accuracy_delta_nato_minus_literal"], reverse=True)
return rows
def _write_samples_report(path: Path, rows: list[DecodedRow]) -> None:
path.parent.mkdir(parents=True, exist_ok=True)
with path.open("w", encoding="utf-8") as handle:
for row in rows:
payload = {
"wav_path": row.wav_path,
"expected_phrase": row.expected_phrase,
"hypothesis": row.hypothesis,
"expected_intent": row.expected_intent,
"predicted_intent": row.predicted_intent,
"expected_letter": row.expected_letter,
"predicted_letter": row.predicted_letter,
"expected_modifier": row.expected_modifier,
"predicted_modifier": row.predicted_modifier,
"intent_match": row.intent_match,
"audio_ms": row.audio_ms,
"decode_ms": row.decode_ms,
"rtf": row.rtf,
"out_of_grammar": row.out_of_grammar,
}
handle.write(f"{json.dumps(payload, ensure_ascii=False)}\n")
def _load_vosk_bindings() -> tuple[Any, Callable[[Any, float, str], Any]]:
try:
from vosk import KaldiRecognizer, Model, SetLogLevel # type: ignore[import-not-found]
except ModuleNotFoundError as exc:
raise RuntimeError(
"vosk is not installed; run with `uv run --with vosk aman eval-vosk-keystrokes ...`"
) from exc
SetLogLevel(-1)
return Model, KaldiRecognizer
def _phrases_for_grammar(
specs: Iterable[IntentSpec],
*,
grammar: str,
) -> list[str]:
if grammar not in {"literal", "nato"}:
raise RuntimeError(f"unsupported grammar type '{grammar}'")
out: list[str] = []
seen: set[str] = set()
for spec in specs:
phrase = spec.literal_phrase if grammar == "literal" else spec.nato_phrase
key = _norm(phrase)
if key in seen:
continue
seen.add(key)
out.append(phrase)
return sorted(out)
def _inc_metric_bucket(table: dict[str, dict[str, float | int]], key: str, matched: bool) -> None:
bucket = table.setdefault(key, {"total": 0, "matches": 0, "accuracy": 0.0})
bucket["total"] = int(bucket["total"]) + 1
if matched:
bucket["matches"] = int(bucket["matches"]) + 1
def _finalize_metric_buckets(table: dict[str, dict[str, float | int]]) -> None:
for bucket in table.values():
total = int(bucket["total"])
matches = int(bucket["matches"])
bucket["accuracy"] = (matches / total) if total else 0.0
def _inc_confusion(table: dict[str, dict[str, int]], expected: str, predicted: str) -> None:
row = table.setdefault(expected, {})
row[predicted] = int(row.get(predicted, 0)) + 1
def _safe_filename(value: str) -> str:
out = []
for ch in value:
if ch.isalnum() or ch in {"-", "_", "."}:
out.append(ch)
else:
out.append("_")
return "".join(out).strip("_") or "model"
def _load_json(path: str | Path, *, description: str) -> Any:
data_path = Path(path)
if not data_path.exists():
raise RuntimeError(f"{description} file does not exist: {data_path}")
try:
return json.loads(data_path.read_text(encoding="utf-8"))
except Exception as exc:
raise RuntimeError(f"invalid {description} json '{data_path}': {exc}") from exc
def _resolve_manifest_wav_path(raw_value: str, *, manifest_dir: Path) -> Path:
candidate = Path(raw_value).expanduser()
if candidate.is_absolute():
return candidate.resolve()
cwd_candidate = (Path.cwd() / candidate).resolve()
if cwd_candidate.exists():
return cwd_candidate
manifest_candidate = (manifest_dir / candidate).resolve()
if manifest_candidate.exists():
return manifest_candidate
return cwd_candidate
def _norm(value: str) -> str:
return " ".join((value or "").strip().casefold().split())


@@ -47,6 +47,15 @@ class AlignmentHeuristicEngineTests(unittest.TestCase):
self.assertEqual(result.applied_count, 1)
self.assertTrue(any(item.rule_id == "cue_correction" for item in result.decisions))
def test_applies_i_mean_tail_correction_without_asr_words(self):
engine = AlignmentHeuristicEngine()
result = engine.apply("schedule for 5, i mean 6", [])
self.assertEqual(result.draft_text, "schedule for 6")
self.assertEqual(result.applied_count, 1)
self.assertTrue(any(item.rule_id == "cue_correction" for item in result.decisions))
def test_preserves_literal_i_mean_context(self):
engine = AlignmentHeuristicEngine()
words = _words(["write", "exactly", "i", "mean", "this", "sincerely"])
@@ -57,6 +66,15 @@ class AlignmentHeuristicEngineTests(unittest.TestCase):
self.assertEqual(result.applied_count, 0)
self.assertGreaterEqual(result.skipped_count, 1)
def test_preserves_literal_i_mean_context_without_asr_words(self):
engine = AlignmentHeuristicEngine()
result = engine.apply("write exactly i mean this sincerely", [])
self.assertEqual(result.draft_text, "write exactly i mean this sincerely")
self.assertEqual(result.applied_count, 0)
self.assertGreaterEqual(result.skipped_count, 1)
def test_collapses_exact_restart_repetition(self):
engine = AlignmentHeuristicEngine()
words = _words(["please", "send", "it", "please", "send", "it"])


@@ -141,6 +141,64 @@ class AmanCliTests(unittest.TestCase):
with self.assertRaises(SystemExit):
aman._parse_cli_args(["bench"])
def test_parse_cli_args_collect_fixed_phrases_command(self):
args = aman._parse_cli_args(
[
"collect-fixed-phrases",
"--phrases-file",
"exploration/vosk/fixed_phrases/phrases.txt",
"--out-dir",
"exploration/vosk/fixed_phrases",
"--samples-per-phrase",
"10",
"--samplerate",
"16000",
"--channels",
"1",
"--device",
"2",
"--session-id",
"session-123",
"--overwrite-session",
"--json",
]
)
self.assertEqual(args.command, "collect-fixed-phrases")
self.assertEqual(args.phrases_file, "exploration/vosk/fixed_phrases/phrases.txt")
self.assertEqual(args.out_dir, "exploration/vosk/fixed_phrases")
self.assertEqual(args.samples_per_phrase, 10)
self.assertEqual(args.samplerate, 16000)
self.assertEqual(args.channels, 1)
self.assertEqual(args.device, "2")
self.assertEqual(args.session_id, "session-123")
self.assertTrue(args.overwrite_session)
self.assertTrue(args.json)
def test_parse_cli_args_eval_vosk_keystrokes_command(self):
args = aman._parse_cli_args(
[
"eval-vosk-keystrokes",
"--literal-manifest",
"exploration/vosk/keystrokes/literal/manifest.jsonl",
"--nato-manifest",
"exploration/vosk/keystrokes/nato/manifest.jsonl",
"--intents",
"exploration/vosk/keystrokes/intents.json",
"--output-dir",
"exploration/vosk/keystrokes/eval_runs",
"--models-file",
"exploration/vosk/keystrokes/models.json",
"--json",
]
)
self.assertEqual(args.command, "eval-vosk-keystrokes")
self.assertEqual(args.literal_manifest, "exploration/vosk/keystrokes/literal/manifest.jsonl")
self.assertEqual(args.nato_manifest, "exploration/vosk/keystrokes/nato/manifest.jsonl")
self.assertEqual(args.intents, "exploration/vosk/keystrokes/intents.json")
self.assertEqual(args.output_dir, "exploration/vosk/keystrokes/eval_runs")
self.assertEqual(args.models_file, "exploration/vosk/keystrokes/models.json")
self.assertTrue(args.json)
def test_parse_cli_args_eval_models_command(self):
args = aman._parse_cli_args(
["eval-models", "--dataset", "benchmarks/cleanup_dataset.jsonl", "--matrix", "benchmarks/model_matrix.small_first.json"]
@@ -379,6 +437,83 @@ class AmanCliTests(unittest.TestCase):
payload = json.loads(out.getvalue())
self.assertEqual(payload["written_rows"], 4)
def test_collect_fixed_phrases_command_rejects_non_positive_samples_per_phrase(self):
args = aman._parse_cli_args(
["collect-fixed-phrases", "--samples-per-phrase", "0"]
)
exit_code = aman._collect_fixed_phrases_command(args)
self.assertEqual(exit_code, 1)
def test_collect_fixed_phrases_command_json_output(self):
args = aman._parse_cli_args(
[
"collect-fixed-phrases",
"--phrases-file",
"exploration/vosk/fixed_phrases/phrases.txt",
"--out-dir",
"exploration/vosk/fixed_phrases",
"--samples-per-phrase",
"2",
"--json",
]
)
out = io.StringIO()
fake_result = SimpleNamespace(
session_id="session-1",
phrases=2,
samples_per_phrase=2,
samples_target=4,
samples_written=4,
out_dir=Path("/tmp/out"),
manifest_path=Path("/tmp/out/manifest.jsonl"),
interrupted=False,
)
with patch("aman.collect_fixed_phrases", return_value=fake_result), patch("sys.stdout", out):
exit_code = aman._collect_fixed_phrases_command(args)
self.assertEqual(exit_code, 0)
payload = json.loads(out.getvalue())
self.assertEqual(payload["session_id"], "session-1")
self.assertEqual(payload["samples_written"], 4)
self.assertFalse(payload["interrupted"])
def test_eval_vosk_keystrokes_command_json_output(self):
args = aman._parse_cli_args(
[
"eval-vosk-keystrokes",
"--literal-manifest",
"exploration/vosk/keystrokes/literal/manifest.jsonl",
"--nato-manifest",
"exploration/vosk/keystrokes/nato/manifest.jsonl",
"--intents",
"exploration/vosk/keystrokes/intents.json",
"--output-dir",
"exploration/vosk/keystrokes/eval_runs",
"--json",
]
)
out = io.StringIO()
fake_summary = {
"models": [
{
"name": "vosk-small-en-us-0.15",
"literal": {"intent_accuracy": 1.0, "latency_ms": {"p50": 30.0}},
"nato": {"intent_accuracy": 0.9, "latency_ms": {"p50": 35.0}},
}
],
"winners": {
"literal": {"name": "vosk-small-en-us-0.15", "intent_accuracy": 1.0, "latency_p50_ms": 30.0},
"nato": {"name": "vosk-small-en-us-0.15", "intent_accuracy": 0.9, "latency_p50_ms": 35.0},
"overall": {"name": "vosk-small-en-us-0.15", "avg_intent_accuracy": 0.95, "avg_latency_p50_ms": 32.5},
},
"output_dir": "exploration/vosk/keystrokes/eval_runs/run-1",
}
with patch("aman.run_vosk_keystroke_eval", return_value=fake_summary), patch("sys.stdout", out):
exit_code = aman._eval_vosk_keystrokes_command(args)
self.assertEqual(exit_code, 0)
payload = json.loads(out.getvalue())
self.assertEqual(payload["models"][0]["name"], "vosk-small-en-us-0.15")
self.assertEqual(payload["winners"]["overall"]["name"], "vosk-small-en-us-0.15")
def test_sync_default_model_command_updates_constants(self):
with tempfile.TemporaryDirectory() as td:
report_path = Path(td) / "latest.json"


@@ -93,6 +93,23 @@ class PipelineEngineTests(unittest.TestCase):
self.assertEqual(result.fact_guard_action, "accepted")
self.assertEqual(result.fact_guard_violations, 0)
def test_run_transcript_without_words_applies_i_mean_correction(self):
editor = _FakeEditor()
pipeline = PipelineEngine(
asr_stage=None,
editor_stage=editor,
vocabulary=VocabularyEngine(VocabularyConfig()),
alignment_engine=AlignmentHeuristicEngine(),
)
result = pipeline.run_transcript("schedule for 5, i mean 6", language="en")
self.assertEqual(editor.calls[0]["transcript"], "schedule for 6")
self.assertEqual(result.output_text, "schedule for 6")
self.assertEqual(result.alignment_applied, 1)
self.assertEqual(result.fact_guard_action, "accepted")
self.assertEqual(result.fact_guard_violations, 0)
def test_fact_guard_fallbacks_when_editor_changes_number(self):
editor = _FakeEditor(output_text="set alarm for 8")
pipeline = PipelineEngine(

tests/test_vosk_collect.py Normal file

@@ -0,0 +1,148 @@
import json
import sys
import tempfile
import unittest
from pathlib import Path
import numpy as np
ROOT = Path(__file__).resolve().parents[1]
SRC = ROOT / "src"
if str(SRC) not in sys.path:
sys.path.insert(0, str(SRC))
from vosk_collect import CollectOptions, collect_fixed_phrases, float_to_pcm16, load_phrases, slugify_phrase
class VoskCollectTests(unittest.TestCase):
def test_load_phrases_ignores_blank_comment_and_deduplicates(self):
with tempfile.TemporaryDirectory() as td:
path = Path(td) / "phrases.txt"
path.write_text(
(
"# heading\n"
"\n"
"close app\n"
"take a screenshot\n"
"close app\n"
" \n"
),
encoding="utf-8",
)
phrases = load_phrases(path)
self.assertEqual(phrases, ["close app", "take a screenshot"])
def test_load_phrases_empty_after_filtering_raises(self):
with tempfile.TemporaryDirectory() as td:
path = Path(td) / "phrases.txt"
path.write_text("# only comments\n\n", encoding="utf-8")
with self.assertRaisesRegex(RuntimeError, "no usable labels"):
load_phrases(path)
def test_slugify_phrase_is_deterministic(self):
self.assertEqual(slugify_phrase("Take a Screenshot"), "take_a_screenshot")
self.assertEqual(slugify_phrase("close-app!!!"), "close_app")
def test_float_to_pcm16_clamps_audio_bounds(self):
values = np.asarray([-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0], dtype=np.float32)
out = float_to_pcm16(values)
self.assertEqual(out.dtype, np.int16)
self.assertGreaterEqual(int(out.min()), -32767)
self.assertLessEqual(int(out.max()), 32767)
self.assertEqual(int(out[0]), -32767)
self.assertEqual(int(out[-1]), 32767)
def test_collect_fixed_phrases_writes_manifest_and_wavs(self):
with tempfile.TemporaryDirectory() as td:
root = Path(td)
phrases_path = root / "phrases.txt"
out_dir = root / "dataset"
phrases_path.write_text("close app\ntake a screenshot\n", encoding="utf-8")
options = CollectOptions(
phrases_file=phrases_path,
out_dir=out_dir,
samples_per_phrase=2,
samplerate=16000,
channels=1,
session_id="session-1",
)
answers = ["", "", "", ""]
def fake_input(_prompt: str) -> str:
return answers.pop(0)
def fake_record(_options: CollectOptions, _input_func):
audio = np.ones((320, 1), dtype=np.float32) * 0.1
return audio, 320, 20
result = collect_fixed_phrases(
options,
input_func=fake_input,
output_func=lambda _line: None,
record_sample_fn=fake_record,
)
self.assertFalse(result.interrupted)
self.assertEqual(result.samples_written, 4)
manifest = out_dir / "manifest.jsonl"
rows = [
json.loads(line)
for line in manifest.read_text(encoding="utf-8").splitlines()
if line.strip()
]
self.assertEqual(len(rows), 4)
required = {
"session_id",
"timestamp_utc",
"phrase",
"phrase_slug",
"sample_index",
"wav_path",
"samplerate",
"channels",
"duration_ms",
"frames",
"device_spec",
"collector_version",
}
self.assertTrue(required.issubset(rows[0].keys()))
wav_paths = [root / Path(row["wav_path"]) for row in rows]
for wav_path in wav_paths:
self.assertTrue(wav_path.exists(), f"missing wav: {wav_path}")
def test_collect_fixed_phrases_refuses_existing_session_without_overwrite(self):
with tempfile.TemporaryDirectory() as td:
root = Path(td)
phrases_path = root / "phrases.txt"
out_dir = root / "dataset"
phrases_path.write_text("close app\n", encoding="utf-8")
options = CollectOptions(
phrases_file=phrases_path,
out_dir=out_dir,
samples_per_phrase=1,
samplerate=16000,
channels=1,
session_id="session-1",
)
def fake_record(_options: CollectOptions, _input_func):
audio = np.ones((160, 1), dtype=np.float32) * 0.2
return audio, 160, 10
collect_fixed_phrases(
options,
input_func=lambda _prompt: "",
output_func=lambda _line: None,
record_sample_fn=fake_record,
)
with self.assertRaisesRegex(RuntimeError, "already has samples"):
collect_fixed_phrases(
options,
input_func=lambda _prompt: "",
output_func=lambda _line: None,
record_sample_fn=fake_record,
)
if __name__ == "__main__":
unittest.main()

tests/test_vosk_eval.py Normal file

@@ -0,0 +1,327 @@
import json
import sys
import tempfile
import unittest
import wave
from pathlib import Path
from unittest.mock import patch
ROOT = Path(__file__).resolve().parents[1]
SRC = ROOT / "src"
if str(SRC) not in sys.path:
sys.path.insert(0, str(SRC))
from vosk_eval import (
DecodedRow,
build_phrase_to_intent_index,
load_keystroke_intents,
run_vosk_keystroke_eval,
summarize_decoded_rows,
)
class VoskEvalTests(unittest.TestCase):
def test_load_keystroke_intents_parses_valid_payload(self):
with tempfile.TemporaryDirectory() as td:
path = Path(td) / "intents.json"
path.write_text(
json.dumps(
[
{
"intent_id": "ctrl+d",
"literal_phrase": "control d",
"nato_phrase": "control delta",
"letter": "d",
"modifier": "ctrl",
}
]
),
encoding="utf-8",
)
intents = load_keystroke_intents(path)
self.assertEqual(len(intents), 1)
self.assertEqual(intents[0].intent_id, "ctrl+d")
def test_load_keystroke_intents_rejects_duplicate_literal_phrase(self):
with tempfile.TemporaryDirectory() as td:
path = Path(td) / "intents.json"
path.write_text(
json.dumps(
[
{
"intent_id": "ctrl+d",
"literal_phrase": "control d",
"nato_phrase": "control delta",
"letter": "d",
"modifier": "ctrl",
},
{
"intent_id": "ctrl+b",
"literal_phrase": "control d",
"nato_phrase": "control bravo",
"letter": "b",
"modifier": "ctrl",
},
]
),
encoding="utf-8",
)
with self.assertRaisesRegex(RuntimeError, "duplicate literal_phrase"):
load_keystroke_intents(path)
def test_build_phrase_to_intent_index_uses_grammar_variant(self):
intents = [
load_keystroke_intents_from_inline(
"ctrl+d",
"control d",
"control delta",
"d",
"ctrl",
)
]
literal = build_phrase_to_intent_index(intents, grammar="literal")
nato = build_phrase_to_intent_index(intents, grammar="nato")
self.assertIn("control d", literal)
self.assertIn("control delta", nato)
def test_summarize_decoded_rows_reports_confusions(self):
rows = [
DecodedRow(
wav_path="a.wav",
expected_phrase="control d",
hypothesis="control d",
expected_intent="ctrl+d",
predicted_intent="ctrl+d",
expected_letter="d",
predicted_letter="d",
expected_modifier="ctrl",
predicted_modifier="ctrl",
intent_match=True,
audio_ms=1000.0,
decode_ms=100.0,
rtf=0.1,
out_of_grammar=False,
),
DecodedRow(
wav_path="b.wav",
expected_phrase="control b",
hypothesis="control p",
expected_intent="ctrl+b",
predicted_intent="ctrl+p",
expected_letter="b",
predicted_letter="p",
expected_modifier="ctrl",
predicted_modifier="ctrl",
intent_match=False,
audio_ms=1000.0,
decode_ms=120.0,
rtf=0.12,
out_of_grammar=False,
),
DecodedRow(
wav_path="c.wav",
expected_phrase="control p",
hypothesis="",
expected_intent="ctrl+p",
predicted_intent=None,
expected_letter="p",
predicted_letter=None,
expected_modifier="ctrl",
predicted_modifier=None,
intent_match=False,
audio_ms=1000.0,
decode_ms=90.0,
rtf=0.09,
out_of_grammar=False,
),
]
summary = summarize_decoded_rows(rows)
self.assertEqual(summary["samples"], 3)
self.assertAlmostEqual(summary["intent_accuracy"], 1 / 3, places=6)
self.assertEqual(summary["unknown_count"], 1)
self.assertEqual(summary["intent_confusion"]["ctrl+b"]["ctrl+p"], 1)
self.assertEqual(summary["letter_confusion"]["p"]["__none__"], 1)
self.assertGreaterEqual(len(summary["top_raw_mismatches"]), 1)
def test_run_vosk_keystroke_eval_hard_fails_model_with_out_of_grammar_output(self):
with tempfile.TemporaryDirectory() as td:
root = Path(td)
literal_manifest = root / "literal.jsonl"
nato_manifest = root / "nato.jsonl"
intents_path = root / "intents.json"
output_dir = root / "out"
model_dir = root / "model"
model_dir.mkdir(parents=True, exist_ok=True)
wav_path = root / "sample.wav"
_write_silence_wav(wav_path, samplerate=16000, frames=800)
intents_path.write_text(
json.dumps(
[
{
"intent_id": "ctrl+d",
"literal_phrase": "control d",
"nato_phrase": "control delta",
"letter": "d",
"modifier": "ctrl",
}
]
),
encoding="utf-8",
)
literal_manifest.write_text(
json.dumps({"phrase": "control d", "wav_path": str(wav_path)}) + "\n",
encoding="utf-8",
)
nato_manifest.write_text(
json.dumps({"phrase": "control delta", "wav_path": str(wav_path)}) + "\n",
encoding="utf-8",
)
models_file = root / "models.json"
models_file.write_text(
json.dumps([{"name": "fake", "path": str(model_dir)}]),
encoding="utf-8",
)
class _FakeModel:
def __init__(self, _path: str):
return
class _FakeRecognizer:
def __init__(self, _model, _rate, _grammar_json):
return
def SetWords(self, _enabled: bool):
return
def AcceptWaveform(self, _payload: bytes):
return True
def FinalResult(self):
return json.dumps({"text": "outside hypothesis"})
with patch("vosk_eval._load_vosk_bindings", return_value=(_FakeModel, _FakeRecognizer)):
with self.assertRaisesRegex(RuntimeError, "out-of-grammar"):
run_vosk_keystroke_eval(
literal_manifest=literal_manifest,
nato_manifest=nato_manifest,
intents_path=intents_path,
output_dir=output_dir,
models_file=models_file,
verbose=False,
)
def test_run_vosk_keystroke_eval_resolves_manifest_relative_wav_paths(self):
with tempfile.TemporaryDirectory() as td:
root = Path(td)
manifests_dir = root / "manifests"
samples_dir = manifests_dir / "samples"
samples_dir.mkdir(parents=True, exist_ok=True)
wav_path = samples_dir / "sample.wav"
_write_silence_wav(wav_path, samplerate=16000, frames=800)
literal_manifest = manifests_dir / "literal.jsonl"
nato_manifest = manifests_dir / "nato.jsonl"
intents_path = root / "intents.json"
output_dir = root / "out"
model_dir = root / "model"
model_dir.mkdir(parents=True, exist_ok=True)
intents_path.write_text(
json.dumps(
[
{
"intent_id": "ctrl+d",
"literal_phrase": "control d",
"nato_phrase": "control delta",
"letter": "d",
"modifier": "ctrl",
}
]
),
encoding="utf-8",
)
relative_wav = "samples/sample.wav"
literal_manifest.write_text(
json.dumps({"phrase": "control d", "wav_path": relative_wav}) + "\n",
encoding="utf-8",
)
nato_manifest.write_text(
json.dumps({"phrase": "control delta", "wav_path": relative_wav}) + "\n",
encoding="utf-8",
)
models_file = root / "models.json"
models_file.write_text(
json.dumps([{"name": "fake", "path": str(model_dir)}]),
encoding="utf-8",
)
class _FakeModel:
def __init__(self, _path: str):
return
class _FakeRecognizer:
def __init__(self, _model, _rate, grammar_json):
phrases = json.loads(grammar_json)
self._text = str(phrases[0]) if phrases else ""
def SetWords(self, _enabled: bool):
return
def AcceptWaveform(self, _payload: bytes):
return True
def FinalResult(self):
return json.dumps({"text": self._text})
with patch("vosk_eval._load_vosk_bindings", return_value=(_FakeModel, _FakeRecognizer)):
summary = run_vosk_keystroke_eval(
literal_manifest=literal_manifest,
nato_manifest=nato_manifest,
intents_path=intents_path,
output_dir=output_dir,
models_file=models_file,
verbose=False,
)
self.assertEqual(summary["models"][0]["literal"]["intent_accuracy"], 1.0)
self.assertEqual(summary["models"][0]["nato"]["intent_accuracy"], 1.0)
def load_keystroke_intents_from_inline(
intent_id: str,
literal_phrase: str,
nato_phrase: str,
letter: str,
modifier: str,
):
return load_keystroke_intents_from_json(
[
{
"intent_id": intent_id,
"literal_phrase": literal_phrase,
"nato_phrase": nato_phrase,
"letter": letter,
"modifier": modifier,
}
]
)[0]
def load_keystroke_intents_from_json(payload):
with tempfile.TemporaryDirectory() as td:
path = Path(td) / "intents.json"
path.write_text(json.dumps(payload), encoding="utf-8")
return load_keystroke_intents(path)
def _write_silence_wav(path: Path, *, samplerate: int, frames: int):
path.parent.mkdir(parents=True, exist_ok=True)
with wave.open(str(path), "wb") as handle:
handle.setnchannels(1)
handle.setsampwidth(2)
handle.setframerate(samplerate)
handle.writeframes(b"\x00\x00" * frames)
if __name__ == "__main__":
unittest.main()
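The relative-path test above exercises one rule: a `wav_path` in a manifest entry is resolved against the directory containing the manifest itself, so `samples/sample.wav` next to `literal.jsonl` is found without absolute paths. A minimal standalone sketch of that rule (the `resolve_wav_path` helper name is hypothetical, not the actual function in `vosk_eval`):

```python
import json
from pathlib import Path


def resolve_wav_path(manifest_path: Path, entry: dict) -> Path:
    # Hypothetical helper: absolute wav paths pass through unchanged;
    # relative ones are joined onto the manifest's own directory.
    wav = Path(entry["wav_path"])
    return wav if wav.is_absolute() else manifest_path.parent / wav


manifest = Path("/data/manifests/literal.jsonl")
entry = json.loads('{"phrase": "control d", "wav_path": "samples/sample.wav"}')
print(resolve_wav_path(manifest, entry))  # on POSIX: /data/manifests/samples/sample.wav
```

This keeps datasets relocatable: moving the `manifests/` directory (samples included) needs no manifest rewrite, which is exactly what the test asserts by reporting 1.0 intent accuracy from a relative `wav_path`.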

uv.lock generated

@@ -17,6 +17,7 @@ dependencies = [
{ name = "numpy", version = "2.4.2", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version >= '3.11'" },
{ name = "pillow" },
{ name = "sounddevice" },
{ name = "vosk" },
]
[package.optional-dependencies]
@@ -34,6 +35,7 @@ requires-dist = [
{ name = "pygobject", marker = "extra == 'x11'" },
{ name = "python-xlib", marker = "extra == 'x11'" },
{ name = "sounddevice" },
{ name = "vosk", specifier = ">=0.3.45" },
]
provides-extras = ["x11", "wayland"]
@@ -199,6 +201,95 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/ae/3a/dbeec9d1ee0844c679f6bb5d6ad4e9f198b1224f4e7a32825f47f6192b0c/cffi-2.0.0-cp314-cp314t-win_arm64.whl", hash = "sha256:0a1527a803f0a659de1af2e1fd700213caba79377e27e4693648c2923da066f9", size = 184195, upload-time = "2025-09-08T23:23:43.004Z" },
]
[[package]]
name = "charset-normalizer"
version = "3.4.4"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/13/69/33ddede1939fdd074bce5434295f38fae7136463422fe4fd3e0e89b98062/charset_normalizer-3.4.4.tar.gz", hash = "sha256:94537985111c35f28720e43603b8e7b43a6ecfb2ce1d3058bbe955b73404e21a", size = 129418, upload-time = "2025-10-14T04:42:32.879Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/1f/b8/6d51fc1d52cbd52cd4ccedd5b5b2f0f6a11bbf6765c782298b0f3e808541/charset_normalizer-3.4.4-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:e824f1492727fa856dd6eda4f7cee25f8518a12f3c4a56a74e8095695089cf6d", size = 209709, upload-time = "2025-10-14T04:40:11.385Z" },
{ url = "https://files.pythonhosted.org/packages/5c/af/1f9d7f7faafe2ddfb6f72a2e07a548a629c61ad510fe60f9630309908fef/charset_normalizer-3.4.4-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:4bd5d4137d500351a30687c2d3971758aac9a19208fc110ccb9d7188fbe709e8", size = 148814, upload-time = "2025-10-14T04:40:13.135Z" },
{ url = "https://files.pythonhosted.org/packages/79/3d/f2e3ac2bbc056ca0c204298ea4e3d9db9b4afe437812638759db2c976b5f/charset_normalizer-3.4.4-cp310-cp310-manylinux2014_armv7l.manylinux_2_17_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:027f6de494925c0ab2a55eab46ae5129951638a49a34d87f4c3eda90f696b4ad", size = 144467, upload-time = "2025-10-14T04:40:14.728Z" },
{ url = "https://files.pythonhosted.org/packages/ec/85/1bf997003815e60d57de7bd972c57dc6950446a3e4ccac43bc3070721856/charset_normalizer-3.4.4-cp310-cp310-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:f820802628d2694cb7e56db99213f930856014862f3fd943d290ea8438d07ca8", size = 162280, upload-time = "2025-10-14T04:40:16.14Z" },
{ url = "https://files.pythonhosted.org/packages/3e/8e/6aa1952f56b192f54921c436b87f2aaf7c7a7c3d0d1a765547d64fd83c13/charset_normalizer-3.4.4-cp310-cp310-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:798d75d81754988d2565bff1b97ba5a44411867c0cf32b77a7e8f8d84796b10d", size = 159454, upload-time = "2025-10-14T04:40:17.567Z" },
{ url = "https://files.pythonhosted.org/packages/36/3b/60cbd1f8e93aa25d1c669c649b7a655b0b5fb4c571858910ea9332678558/charset_normalizer-3.4.4-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:9d1bb833febdff5c8927f922386db610b49db6e0d4f4ee29601d71e7c2694313", size = 153609, upload-time = "2025-10-14T04:40:19.08Z" },
{ url = "https://files.pythonhosted.org/packages/64/91/6a13396948b8fd3c4b4fd5bc74d045f5637d78c9675585e8e9fbe5636554/charset_normalizer-3.4.4-cp310-cp310-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:9cd98cdc06614a2f768d2b7286d66805f94c48cde050acdbbb7db2600ab3197e", size = 151849, upload-time = "2025-10-14T04:40:20.607Z" },
{ url = "https://files.pythonhosted.org/packages/b7/7a/59482e28b9981d105691e968c544cc0df3b7d6133152fb3dcdc8f135da7a/charset_normalizer-3.4.4-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:077fbb858e903c73f6c9db43374fd213b0b6a778106bc7032446a8e8b5b38b93", size = 151586, upload-time = "2025-10-14T04:40:21.719Z" },
{ url = "https://files.pythonhosted.org/packages/92/59/f64ef6a1c4bdd2baf892b04cd78792ed8684fbc48d4c2afe467d96b4df57/charset_normalizer-3.4.4-cp310-cp310-musllinux_1_2_armv7l.whl", hash = "sha256:244bfb999c71b35de57821b8ea746b24e863398194a4014e4c76adc2bbdfeff0", size = 145290, upload-time = "2025-10-14T04:40:23.069Z" },
{ url = "https://files.pythonhosted.org/packages/6b/63/3bf9f279ddfa641ffa1962b0db6a57a9c294361cc2f5fcac997049a00e9c/charset_normalizer-3.4.4-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = "sha256:64b55f9dce520635f018f907ff1b0df1fdc31f2795a922fb49dd14fbcdf48c84", size = 163663, upload-time = "2025-10-14T04:40:24.17Z" },
{ url = "https://files.pythonhosted.org/packages/ed/09/c9e38fc8fa9e0849b172b581fd9803bdf6e694041127933934184e19f8c3/charset_normalizer-3.4.4-cp310-cp310-musllinux_1_2_riscv64.whl", hash = "sha256:faa3a41b2b66b6e50f84ae4a68c64fcd0c44355741c6374813a800cd6695db9e", size = 151964, upload-time = "2025-10-14T04:40:25.368Z" },
{ url = "https://files.pythonhosted.org/packages/d2/d1/d28b747e512d0da79d8b6a1ac18b7ab2ecfd81b2944c4c710e166d8dd09c/charset_normalizer-3.4.4-cp310-cp310-musllinux_1_2_s390x.whl", hash = "sha256:6515f3182dbe4ea06ced2d9e8666d97b46ef4c75e326b79bb624110f122551db", size = 161064, upload-time = "2025-10-14T04:40:26.806Z" },
{ url = "https://files.pythonhosted.org/packages/bb/9a/31d62b611d901c3b9e5500c36aab0ff5eb442043fb3a1c254200d3d397d9/charset_normalizer-3.4.4-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:cc00f04ed596e9dc0da42ed17ac5e596c6ccba999ba6bd92b0e0aef2f170f2d6", size = 155015, upload-time = "2025-10-14T04:40:28.284Z" },
{ url = "https://files.pythonhosted.org/packages/1f/f3/107e008fa2bff0c8b9319584174418e5e5285fef32f79d8ee6a430d0039c/charset_normalizer-3.4.4-cp310-cp310-win32.whl", hash = "sha256:f34be2938726fc13801220747472850852fe6b1ea75869a048d6f896838c896f", size = 99792, upload-time = "2025-10-14T04:40:29.613Z" },
{ url = "https://files.pythonhosted.org/packages/eb/66/e396e8a408843337d7315bab30dbf106c38966f1819f123257f5520f8a96/charset_normalizer-3.4.4-cp310-cp310-win_amd64.whl", hash = "sha256:a61900df84c667873b292c3de315a786dd8dac506704dea57bc957bd31e22c7d", size = 107198, upload-time = "2025-10-14T04:40:30.644Z" },
{ url = "https://files.pythonhosted.org/packages/b5/58/01b4f815bf0312704c267f2ccb6e5d42bcc7752340cd487bc9f8c3710597/charset_normalizer-3.4.4-cp310-cp310-win_arm64.whl", hash = "sha256:cead0978fc57397645f12578bfd2d5ea9138ea0fac82b2f63f7f7c6877986a69", size = 100262, upload-time = "2025-10-14T04:40:32.108Z" },
{ url = "https://files.pythonhosted.org/packages/ed/27/c6491ff4954e58a10f69ad90aca8a1b6fe9c5d3c6f380907af3c37435b59/charset_normalizer-3.4.4-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:6e1fcf0720908f200cd21aa4e6750a48ff6ce4afe7ff5a79a90d5ed8a08296f8", size = 206988, upload-time = "2025-10-14T04:40:33.79Z" },
{ url = "https://files.pythonhosted.org/packages/94/59/2e87300fe67ab820b5428580a53cad894272dbb97f38a7a814a2a1ac1011/charset_normalizer-3.4.4-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:5f819d5fe9234f9f82d75bdfa9aef3a3d72c4d24a6e57aeaebba32a704553aa0", size = 147324, upload-time = "2025-10-14T04:40:34.961Z" },
{ url = "https://files.pythonhosted.org/packages/07/fb/0cf61dc84b2b088391830f6274cb57c82e4da8bbc2efeac8c025edb88772/charset_normalizer-3.4.4-cp311-cp311-manylinux2014_armv7l.manylinux_2_17_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:a59cb51917aa591b1c4e6a43c132f0cdc3c76dbad6155df4e28ee626cc77a0a3", size = 142742, upload-time = "2025-10-14T04:40:36.105Z" },
{ url = "https://files.pythonhosted.org/packages/62/8b/171935adf2312cd745d290ed93cf16cf0dfe320863ab7cbeeae1dcd6535f/charset_normalizer-3.4.4-cp311-cp311-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:8ef3c867360f88ac904fd3f5e1f902f13307af9052646963ee08ff4f131adafc", size = 160863, upload-time = "2025-10-14T04:40:37.188Z" },
{ url = "https://files.pythonhosted.org/packages/09/73/ad875b192bda14f2173bfc1bc9a55e009808484a4b256748d931b6948442/charset_normalizer-3.4.4-cp311-cp311-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:d9e45d7faa48ee908174d8fe84854479ef838fc6a705c9315372eacbc2f02897", size = 157837, upload-time = "2025-10-14T04:40:38.435Z" },
{ url = "https://files.pythonhosted.org/packages/6d/fc/de9cce525b2c5b94b47c70a4b4fb19f871b24995c728e957ee68ab1671ea/charset_normalizer-3.4.4-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:840c25fb618a231545cbab0564a799f101b63b9901f2569faecd6b222ac72381", size = 151550, upload-time = "2025-10-14T04:40:40.053Z" },
{ url = "https://files.pythonhosted.org/packages/55/c2/43edd615fdfba8c6f2dfbd459b25a6b3b551f24ea21981e23fb768503ce1/charset_normalizer-3.4.4-cp311-cp311-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:ca5862d5b3928c4940729dacc329aa9102900382fea192fc5e52eb69d6093815", size = 149162, upload-time = "2025-10-14T04:40:41.163Z" },
{ url = "https://files.pythonhosted.org/packages/03/86/bde4ad8b4d0e9429a4e82c1e8f5c659993a9a863ad62c7df05cf7b678d75/charset_normalizer-3.4.4-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:d9c7f57c3d666a53421049053eaacdd14bbd0a528e2186fcb2e672effd053bb0", size = 150019, upload-time = "2025-10-14T04:40:42.276Z" },
{ url = "https://files.pythonhosted.org/packages/1f/86/a151eb2af293a7e7bac3a739b81072585ce36ccfb4493039f49f1d3cae8c/charset_normalizer-3.4.4-cp311-cp311-musllinux_1_2_armv7l.whl", hash = "sha256:277e970e750505ed74c832b4bf75dac7476262ee2a013f5574dd49075879e161", size = 143310, upload-time = "2025-10-14T04:40:43.439Z" },
{ url = "https://files.pythonhosted.org/packages/b5/fe/43dae6144a7e07b87478fdfc4dbe9efd5defb0e7ec29f5f58a55aeef7bf7/charset_normalizer-3.4.4-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:31fd66405eaf47bb62e8cd575dc621c56c668f27d46a61d975a249930dd5e2a4", size = 162022, upload-time = "2025-10-14T04:40:44.547Z" },
{ url = "https://files.pythonhosted.org/packages/80/e6/7aab83774f5d2bca81f42ac58d04caf44f0cc2b65fc6db2b3b2e8a05f3b3/charset_normalizer-3.4.4-cp311-cp311-musllinux_1_2_riscv64.whl", hash = "sha256:0d3d8f15c07f86e9ff82319b3d9ef6f4bf907608f53fe9d92b28ea9ae3d1fd89", size = 149383, upload-time = "2025-10-14T04:40:46.018Z" },
{ url = "https://files.pythonhosted.org/packages/4f/e8/b289173b4edae05c0dde07f69f8db476a0b511eac556dfe0d6bda3c43384/charset_normalizer-3.4.4-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:9f7fcd74d410a36883701fafa2482a6af2ff5ba96b9a620e9e0721e28ead5569", size = 159098, upload-time = "2025-10-14T04:40:47.081Z" },
{ url = "https://files.pythonhosted.org/packages/d8/df/fe699727754cae3f8478493c7f45f777b17c3ef0600e28abfec8619eb49c/charset_normalizer-3.4.4-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:ebf3e58c7ec8a8bed6d66a75d7fb37b55e5015b03ceae72a8e7c74495551e224", size = 152991, upload-time = "2025-10-14T04:40:48.246Z" },
{ url = "https://files.pythonhosted.org/packages/1a/86/584869fe4ddb6ffa3bd9f491b87a01568797fb9bd8933f557dba9771beaf/charset_normalizer-3.4.4-cp311-cp311-win32.whl", hash = "sha256:eecbc200c7fd5ddb9a7f16c7decb07b566c29fa2161a16cf67b8d068bd21690a", size = 99456, upload-time = "2025-10-14T04:40:49.376Z" },
{ url = "https://files.pythonhosted.org/packages/65/f6/62fdd5feb60530f50f7e38b4f6a1d5203f4d16ff4f9f0952962c044e919a/charset_normalizer-3.4.4-cp311-cp311-win_amd64.whl", hash = "sha256:5ae497466c7901d54b639cf42d5b8c1b6a4fead55215500d2f486d34db48d016", size = 106978, upload-time = "2025-10-14T04:40:50.844Z" },
{ url = "https://files.pythonhosted.org/packages/7a/9d/0710916e6c82948b3be62d9d398cb4fcf4e97b56d6a6aeccd66c4b2f2bd5/charset_normalizer-3.4.4-cp311-cp311-win_arm64.whl", hash = "sha256:65e2befcd84bc6f37095f5961e68a6f077bf44946771354a28ad434c2cce0ae1", size = 99969, upload-time = "2025-10-14T04:40:52.272Z" },
{ url = "https://files.pythonhosted.org/packages/f3/85/1637cd4af66fa687396e757dec650f28025f2a2f5a5531a3208dc0ec43f2/charset_normalizer-3.4.4-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:0a98e6759f854bd25a58a73fa88833fba3b7c491169f86ce1180c948ab3fd394", size = 208425, upload-time = "2025-10-14T04:40:53.353Z" },
{ url = "https://files.pythonhosted.org/packages/9d/6a/04130023fef2a0d9c62d0bae2649b69f7b7d8d24ea5536feef50551029df/charset_normalizer-3.4.4-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:b5b290ccc2a263e8d185130284f8501e3e36c5e02750fc6b6bdeb2e9e96f1e25", size = 148162, upload-time = "2025-10-14T04:40:54.558Z" },
{ url = "https://files.pythonhosted.org/packages/78/29/62328d79aa60da22c9e0b9a66539feae06ca0f5a4171ac4f7dc285b83688/charset_normalizer-3.4.4-cp312-cp312-manylinux2014_armv7l.manylinux_2_17_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:74bb723680f9f7a6234dcf67aea57e708ec1fbdf5699fb91dfd6f511b0a320ef", size = 144558, upload-time = "2025-10-14T04:40:55.677Z" },
{ url = "https://files.pythonhosted.org/packages/86/bb/b32194a4bf15b88403537c2e120b817c61cd4ecffa9b6876e941c3ee38fe/charset_normalizer-3.4.4-cp312-cp312-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:f1e34719c6ed0b92f418c7c780480b26b5d9c50349e9a9af7d76bf757530350d", size = 161497, upload-time = "2025-10-14T04:40:57.217Z" },
{ url = "https://files.pythonhosted.org/packages/19/89/a54c82b253d5b9b111dc74aca196ba5ccfcca8242d0fb64146d4d3183ff1/charset_normalizer-3.4.4-cp312-cp312-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:2437418e20515acec67d86e12bf70056a33abdacb5cb1655042f6538d6b085a8", size = 159240, upload-time = "2025-10-14T04:40:58.358Z" },
{ url = "https://files.pythonhosted.org/packages/c0/10/d20b513afe03acc89ec33948320a5544d31f21b05368436d580dec4e234d/charset_normalizer-3.4.4-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:11d694519d7f29d6cd09f6ac70028dba10f92f6cdd059096db198c283794ac86", size = 153471, upload-time = "2025-10-14T04:40:59.468Z" },
{ url = "https://files.pythonhosted.org/packages/61/fa/fbf177b55bdd727010f9c0a3c49eefa1d10f960e5f09d1d887bf93c2e698/charset_normalizer-3.4.4-cp312-cp312-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:ac1c4a689edcc530fc9d9aa11f5774b9e2f33f9a0c6a57864e90908f5208d30a", size = 150864, upload-time = "2025-10-14T04:41:00.623Z" },
{ url = "https://files.pythonhosted.org/packages/05/12/9fbc6a4d39c0198adeebbde20b619790e9236557ca59fc40e0e3cebe6f40/charset_normalizer-3.4.4-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:21d142cc6c0ec30d2efee5068ca36c128a30b0f2c53c1c07bd78cb6bc1d3be5f", size = 150647, upload-time = "2025-10-14T04:41:01.754Z" },
{ url = "https://files.pythonhosted.org/packages/ad/1f/6a9a593d52e3e8c5d2b167daf8c6b968808efb57ef4c210acb907c365bc4/charset_normalizer-3.4.4-cp312-cp312-musllinux_1_2_armv7l.whl", hash = "sha256:5dbe56a36425d26d6cfb40ce79c314a2e4dd6211d51d6d2191c00bed34f354cc", size = 145110, upload-time = "2025-10-14T04:41:03.231Z" },
{ url = "https://files.pythonhosted.org/packages/30/42/9a52c609e72471b0fc54386dc63c3781a387bb4fe61c20231a4ebcd58bdd/charset_normalizer-3.4.4-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:5bfbb1b9acf3334612667b61bd3002196fe2a1eb4dd74d247e0f2a4d50ec9bbf", size = 162839, upload-time = "2025-10-14T04:41:04.715Z" },
{ url = "https://files.pythonhosted.org/packages/c4/5b/c0682bbf9f11597073052628ddd38344a3d673fda35a36773f7d19344b23/charset_normalizer-3.4.4-cp312-cp312-musllinux_1_2_riscv64.whl", hash = "sha256:d055ec1e26e441f6187acf818b73564e6e6282709e9bcb5b63f5b23068356a15", size = 150667, upload-time = "2025-10-14T04:41:05.827Z" },
{ url = "https://files.pythonhosted.org/packages/e4/24/a41afeab6f990cf2daf6cb8c67419b63b48cf518e4f56022230840c9bfb2/charset_normalizer-3.4.4-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:af2d8c67d8e573d6de5bc30cdb27e9b95e49115cd9baad5ddbd1a6207aaa82a9", size = 160535, upload-time = "2025-10-14T04:41:06.938Z" },
{ url = "https://files.pythonhosted.org/packages/2a/e5/6a4ce77ed243c4a50a1fecca6aaaab419628c818a49434be428fe24c9957/charset_normalizer-3.4.4-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:780236ac706e66881f3b7f2f32dfe90507a09e67d1d454c762cf642e6e1586e0", size = 154816, upload-time = "2025-10-14T04:41:08.101Z" },
{ url = "https://files.pythonhosted.org/packages/a8/ef/89297262b8092b312d29cdb2517cb1237e51db8ecef2e9af5edbe7b683b1/charset_normalizer-3.4.4-cp312-cp312-win32.whl", hash = "sha256:5833d2c39d8896e4e19b689ffc198f08ea58116bee26dea51e362ecc7cd3ed26", size = 99694, upload-time = "2025-10-14T04:41:09.23Z" },
{ url = "https://files.pythonhosted.org/packages/3d/2d/1e5ed9dd3b3803994c155cd9aacb60c82c331bad84daf75bcb9c91b3295e/charset_normalizer-3.4.4-cp312-cp312-win_amd64.whl", hash = "sha256:a79cfe37875f822425b89a82333404539ae63dbdddf97f84dcbc3d339aae9525", size = 107131, upload-time = "2025-10-14T04:41:10.467Z" },
{ url = "https://files.pythonhosted.org/packages/d0/d9/0ed4c7098a861482a7b6a95603edce4c0d9db2311af23da1fb2b75ec26fc/charset_normalizer-3.4.4-cp312-cp312-win_arm64.whl", hash = "sha256:376bec83a63b8021bb5c8ea75e21c4ccb86e7e45ca4eb81146091b56599b80c3", size = 100390, upload-time = "2025-10-14T04:41:11.915Z" },
{ url = "https://files.pythonhosted.org/packages/97/45/4b3a1239bbacd321068ea6e7ac28875b03ab8bc0aa0966452db17cd36714/charset_normalizer-3.4.4-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:e1f185f86a6f3403aa2420e815904c67b2f9ebc443f045edd0de921108345794", size = 208091, upload-time = "2025-10-14T04:41:13.346Z" },
{ url = "https://files.pythonhosted.org/packages/7d/62/73a6d7450829655a35bb88a88fca7d736f9882a27eacdca2c6d505b57e2e/charset_normalizer-3.4.4-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:6b39f987ae8ccdf0d2642338faf2abb1862340facc796048b604ef14919e55ed", size = 147936, upload-time = "2025-10-14T04:41:14.461Z" },
{ url = "https://files.pythonhosted.org/packages/89/c5/adb8c8b3d6625bef6d88b251bbb0d95f8205831b987631ab0c8bb5d937c2/charset_normalizer-3.4.4-cp313-cp313-manylinux2014_armv7l.manylinux_2_17_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:3162d5d8ce1bb98dd51af660f2121c55d0fa541b46dff7bb9b9f86ea1d87de72", size = 144180, upload-time = "2025-10-14T04:41:15.588Z" },
{ url = "https://files.pythonhosted.org/packages/91/ed/9706e4070682d1cc219050b6048bfd293ccf67b3d4f5a4f39207453d4b99/charset_normalizer-3.4.4-cp313-cp313-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:81d5eb2a312700f4ecaa977a8235b634ce853200e828fbadf3a9c50bab278328", size = 161346, upload-time = "2025-10-14T04:41:16.738Z" },
{ url = "https://files.pythonhosted.org/packages/d5/0d/031f0d95e4972901a2f6f09ef055751805ff541511dc1252ba3ca1f80cf5/charset_normalizer-3.4.4-cp313-cp313-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:5bd2293095d766545ec1a8f612559f6b40abc0eb18bb2f5d1171872d34036ede", size = 158874, upload-time = "2025-10-14T04:41:17.923Z" },
{ url = "https://files.pythonhosted.org/packages/f5/83/6ab5883f57c9c801ce5e5677242328aa45592be8a00644310a008d04f922/charset_normalizer-3.4.4-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:a8a8b89589086a25749f471e6a900d3f662d1d3b6e2e59dcecf787b1cc3a1894", size = 153076, upload-time = "2025-10-14T04:41:19.106Z" },
{ url = "https://files.pythonhosted.org/packages/75/1e/5ff781ddf5260e387d6419959ee89ef13878229732732ee73cdae01800f2/charset_normalizer-3.4.4-cp313-cp313-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:bc7637e2f80d8530ee4a78e878bce464f70087ce73cf7c1caf142416923b98f1", size = 150601, upload-time = "2025-10-14T04:41:20.245Z" },
{ url = "https://files.pythonhosted.org/packages/d7/57/71be810965493d3510a6ca79b90c19e48696fb1ff964da319334b12677f0/charset_normalizer-3.4.4-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:f8bf04158c6b607d747e93949aa60618b61312fe647a6369f88ce2ff16043490", size = 150376, upload-time = "2025-10-14T04:41:21.398Z" },
{ url = "https://files.pythonhosted.org/packages/e5/d5/c3d057a78c181d007014feb7e9f2e65905a6c4ef182c0ddf0de2924edd65/charset_normalizer-3.4.4-cp313-cp313-musllinux_1_2_armv7l.whl", hash = "sha256:554af85e960429cf30784dd47447d5125aaa3b99a6f0683589dbd27e2f45da44", size = 144825, upload-time = "2025-10-14T04:41:22.583Z" },
{ url = "https://files.pythonhosted.org/packages/e6/8c/d0406294828d4976f275ffbe66f00266c4b3136b7506941d87c00cab5272/charset_normalizer-3.4.4-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:74018750915ee7ad843a774364e13a3db91682f26142baddf775342c3f5b1133", size = 162583, upload-time = "2025-10-14T04:41:23.754Z" },
{ url = "https://files.pythonhosted.org/packages/d7/24/e2aa1f18c8f15c4c0e932d9287b8609dd30ad56dbe41d926bd846e22fb8d/charset_normalizer-3.4.4-cp313-cp313-musllinux_1_2_riscv64.whl", hash = "sha256:c0463276121fdee9c49b98908b3a89c39be45d86d1dbaa22957e38f6321d4ce3", size = 150366, upload-time = "2025-10-14T04:41:25.27Z" },
{ url = "https://files.pythonhosted.org/packages/e4/5b/1e6160c7739aad1e2df054300cc618b06bf784a7a164b0f238360721ab86/charset_normalizer-3.4.4-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:362d61fd13843997c1c446760ef36f240cf81d3ebf74ac62652aebaf7838561e", size = 160300, upload-time = "2025-10-14T04:41:26.725Z" },
{ url = "https://files.pythonhosted.org/packages/7a/10/f882167cd207fbdd743e55534d5d9620e095089d176d55cb22d5322f2afd/charset_normalizer-3.4.4-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:9a26f18905b8dd5d685d6d07b0cdf98a79f3c7a918906af7cc143ea2e164c8bc", size = 154465, upload-time = "2025-10-14T04:41:28.322Z" },
{ url = "https://files.pythonhosted.org/packages/89/66/c7a9e1b7429be72123441bfdbaf2bc13faab3f90b933f664db506dea5915/charset_normalizer-3.4.4-cp313-cp313-win32.whl", hash = "sha256:9b35f4c90079ff2e2edc5b26c0c77925e5d2d255c42c74fdb70fb49b172726ac", size = 99404, upload-time = "2025-10-14T04:41:29.95Z" },
{ url = "https://files.pythonhosted.org/packages/c4/26/b9924fa27db384bdcd97ab83b4f0a8058d96ad9626ead570674d5e737d90/charset_normalizer-3.4.4-cp313-cp313-win_amd64.whl", hash = "sha256:b435cba5f4f750aa6c0a0d92c541fb79f69a387c91e61f1795227e4ed9cece14", size = 107092, upload-time = "2025-10-14T04:41:31.188Z" },
{ url = "https://files.pythonhosted.org/packages/af/8f/3ed4bfa0c0c72a7ca17f0380cd9e4dd842b09f664e780c13cff1dcf2ef1b/charset_normalizer-3.4.4-cp313-cp313-win_arm64.whl", hash = "sha256:542d2cee80be6f80247095cc36c418f7bddd14f4a6de45af91dfad36d817bba2", size = 100408, upload-time = "2025-10-14T04:41:32.624Z" },
{ url = "https://files.pythonhosted.org/packages/2a/35/7051599bd493e62411d6ede36fd5af83a38f37c4767b92884df7301db25d/charset_normalizer-3.4.4-cp314-cp314-macosx_10_13_universal2.whl", hash = "sha256:da3326d9e65ef63a817ecbcc0df6e94463713b754fe293eaa03da99befb9a5bd", size = 207746, upload-time = "2025-10-14T04:41:33.773Z" },
{ url = "https://files.pythonhosted.org/packages/10/9a/97c8d48ef10d6cd4fcead2415523221624bf58bcf68a802721a6bc807c8f/charset_normalizer-3.4.4-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:8af65f14dc14a79b924524b1e7fffe304517b2bff5a58bf64f30b98bbc5079eb", size = 147889, upload-time = "2025-10-14T04:41:34.897Z" },
{ url = "https://files.pythonhosted.org/packages/10/bf/979224a919a1b606c82bd2c5fa49b5c6d5727aa47b4312bb27b1734f53cd/charset_normalizer-3.4.4-cp314-cp314-manylinux2014_armv7l.manylinux_2_17_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:74664978bb272435107de04e36db5a9735e78232b85b77d45cfb38f758efd33e", size = 143641, upload-time = "2025-10-14T04:41:36.116Z" },
{ url = "https://files.pythonhosted.org/packages/ba/33/0ad65587441fc730dc7bd90e9716b30b4702dc7b617e6ba4997dc8651495/charset_normalizer-3.4.4-cp314-cp314-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:752944c7ffbfdd10c074dc58ec2d5a8a4cd9493b314d367c14d24c17684ddd14", size = 160779, upload-time = "2025-10-14T04:41:37.229Z" },
{ url = "https://files.pythonhosted.org/packages/67/ed/331d6b249259ee71ddea93f6f2f0a56cfebd46938bde6fcc6f7b9a3d0e09/charset_normalizer-3.4.4-cp314-cp314-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:d1f13550535ad8cff21b8d757a3257963e951d96e20ec82ab44bc64aeb62a191", size = 159035, upload-time = "2025-10-14T04:41:38.368Z" },
{ url = "https://files.pythonhosted.org/packages/67/ff/f6b948ca32e4f2a4576aa129d8bed61f2e0543bf9f5f2b7fc3758ed005c9/charset_normalizer-3.4.4-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:ecaae4149d99b1c9e7b88bb03e3221956f68fd6d50be2ef061b2381b61d20838", size = 152542, upload-time = "2025-10-14T04:41:39.862Z" },
{ url = "https://files.pythonhosted.org/packages/16/85/276033dcbcc369eb176594de22728541a925b2632f9716428c851b149e83/charset_normalizer-3.4.4-cp314-cp314-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:cb6254dc36b47a990e59e1068afacdcd02958bdcce30bb50cc1700a8b9d624a6", size = 149524, upload-time = "2025-10-14T04:41:41.319Z" },
{ url = "https://files.pythonhosted.org/packages/9e/f2/6a2a1f722b6aba37050e626530a46a68f74e63683947a8acff92569f979a/charset_normalizer-3.4.4-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:c8ae8a0f02f57a6e61203a31428fa1d677cbe50c93622b4149d5c0f319c1d19e", size = 150395, upload-time = "2025-10-14T04:41:42.539Z" },
{ url = "https://files.pythonhosted.org/packages/60/bb/2186cb2f2bbaea6338cad15ce23a67f9b0672929744381e28b0592676824/charset_normalizer-3.4.4-cp314-cp314-musllinux_1_2_armv7l.whl", hash = "sha256:47cc91b2f4dd2833fddaedd2893006b0106129d4b94fdb6af1f4ce5a9965577c", size = 143680, upload-time = "2025-10-14T04:41:43.661Z" },
{ url = "https://files.pythonhosted.org/packages/7d/a5/bf6f13b772fbb2a90360eb620d52ed8f796f3c5caee8398c3b2eb7b1c60d/charset_normalizer-3.4.4-cp314-cp314-musllinux_1_2_ppc64le.whl", hash = "sha256:82004af6c302b5d3ab2cfc4cc5f29db16123b1a8417f2e25f9066f91d4411090", size = 162045, upload-time = "2025-10-14T04:41:44.821Z" },
{ url = "https://files.pythonhosted.org/packages/df/c5/d1be898bf0dc3ef9030c3825e5d3b83f2c528d207d246cbabe245966808d/charset_normalizer-3.4.4-cp314-cp314-musllinux_1_2_riscv64.whl", hash = "sha256:2b7d8f6c26245217bd2ad053761201e9f9680f8ce52f0fcd8d0755aeae5b2152", size = 149687, upload-time = "2025-10-14T04:41:46.442Z" },
{ url = "https://files.pythonhosted.org/packages/a5/42/90c1f7b9341eef50c8a1cb3f098ac43b0508413f33affd762855f67a410e/charset_normalizer-3.4.4-cp314-cp314-musllinux_1_2_s390x.whl", hash = "sha256:799a7a5e4fb2d5898c60b640fd4981d6a25f1c11790935a44ce38c54e985f828", size = 160014, upload-time = "2025-10-14T04:41:47.631Z" },
{ url = "https://files.pythonhosted.org/packages/76/be/4d3ee471e8145d12795ab655ece37baed0929462a86e72372fd25859047c/charset_normalizer-3.4.4-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:99ae2cffebb06e6c22bdc25801d7b30f503cc87dbd283479e7b606f70aff57ec", size = 154044, upload-time = "2025-10-14T04:41:48.81Z" },
{ url = "https://files.pythonhosted.org/packages/b0/6f/8f7af07237c34a1defe7defc565a9bc1807762f672c0fde711a4b22bf9c0/charset_normalizer-3.4.4-cp314-cp314-win32.whl", hash = "sha256:f9d332f8c2a2fcbffe1378594431458ddbef721c1769d78e2cbc06280d8155f9", size = 99940, upload-time = "2025-10-14T04:41:49.946Z" },
{ url = "https://files.pythonhosted.org/packages/4b/51/8ade005e5ca5b0d80fb4aff72a3775b325bdc3d27408c8113811a7cbe640/charset_normalizer-3.4.4-cp314-cp314-win_amd64.whl", hash = "sha256:8a6562c3700cce886c5be75ade4a5db4214fda19fede41d9792d100288d8f94c", size = 107104, upload-time = "2025-10-14T04:41:51.051Z" },
{ url = "https://files.pythonhosted.org/packages/da/5f/6b8f83a55bb8278772c5ae54a577f3099025f9ade59d0136ac24a0df4bde/charset_normalizer-3.4.4-cp314-cp314-win_arm64.whl", hash = "sha256:de00632ca48df9daf77a2c65a484531649261ec9f25489917f09e455cb09ddb2", size = 100743, upload-time = "2025-10-14T04:41:52.122Z" },
{ url = "https://files.pythonhosted.org/packages/0a/4c/925909008ed5a988ccbb72dcc897407e5d6d3bd72410d69e051fc0c14647/charset_normalizer-3.4.4-py3-none-any.whl", hash = "sha256:7a32c560861a02ff789ad905a2fe94e3f840803362c84fecf1851cb4cf3dc37f", size = 53402, upload-time = "2025-10-14T04:42:31.76Z" },
]
[[package]]
name = "click"
version = "8.3.1"
@@ -964,6 +1055,21 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/f1/12/de94a39c2ef588c7e6455cfbe7343d3b2dc9d6b6b2f40c4c6565744c873d/pyyaml-6.0.3-cp314-cp314t-win_arm64.whl", hash = "sha256:ebc55a14a21cb14062aa4162f906cd962b28e2e9ea38f9b4391244cd8de4ae0b", size = 149341, upload-time = "2025-09-25T21:32:56.828Z" },
]
[[package]]
name = "requests"
version = "2.32.5"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "certifi" },
{ name = "charset-normalizer" },
{ name = "idna" },
{ name = "urllib3" },
]
sdist = { url = "https://files.pythonhosted.org/packages/c9/74/b3ff8e6c8446842c3f5c837e9c3dfcfe2018ea6ecef224c710c85ef728f4/requests-2.32.5.tar.gz", hash = "sha256:dbba0bac56e100853db0ea71b82b4dfd5fe2bf6d3754a8893c3af500cec7d7cf", size = 134517, upload-time = "2025-08-18T20:46:02.573Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/1e/db/4254e3eabe8020b458f1a747140d32277ec7a271daf1d235b70dc0b4e6e3/requests-2.32.5-py3-none-any.whl", hash = "sha256:2462f94637a34fd532264295e186976db0f5d453d1cdd31473c85a6a161affb6", size = 64738, upload-time = "2025-08-18T20:46:00.542Z" },
]
[[package]]
name = "setuptools"
version = "82.0.0"
@@ -1007,6 +1113,12 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/4e/39/a61d4b83a7746b70d23d9173be688c0c6bfc7173772344b7442c2c155497/sounddevice-0.5.5-py3-none-win_arm64.whl", hash = "sha256:3861901ddd8230d2e0e8ae62ac320cdd4c688d81df89da036dcb812f757bb3e6", size = 317115, upload-time = "2026-01-23T18:36:42.235Z" },
]
[[package]]
name = "srt"
version = "3.5.3"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/66/b7/4a1bc231e0681ebf339337b0cd05b91dc6a0d701fa852bb812e244b7a030/srt-3.5.3.tar.gz", hash = "sha256:4884315043a4f0740fd1f878ed6caa376ac06d70e135f306a6dc44632eed0cc0", size = 28296, upload-time = "2023-03-28T02:35:44.007Z" }
[[package]]
name = "sympy"
version = "1.14.0"
@@ -1082,3 +1194,98 @@ sdist = { url = "https://files.pythonhosted.org/packages/72/94/1a15dd82efb362ac8
wheels = [
{ url = "https://files.pythonhosted.org/packages/18/67/36e9267722cc04a6b9f15c7f3441c2363321a3ea07da7ae0c0707beb2a9c/typing_extensions-4.15.0-py3-none-any.whl", hash = "sha256:f0fa19c6845758ab08074a0cfa8b7aecb71c999ca73d62883bc25cc018c4e548", size = 44614, upload-time = "2025-08-25T13:49:24.86Z" },
]
[[package]]
name = "urllib3"
version = "2.6.3"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/c7/24/5f1b3bdffd70275f6661c76461e25f024d5a38a46f04aaca912426a2b1d3/urllib3-2.6.3.tar.gz", hash = "sha256:1b62b6884944a57dbe321509ab94fd4d3b307075e0c2eae991ac71ee15ad38ed", size = 435556, upload-time = "2026-01-07T16:24:43.925Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/39/08/aaaad47bc4e9dc8c725e68f9d04865dbcb2052843ff09c97b08904852d84/urllib3-2.6.3-py3-none-any.whl", hash = "sha256:bf272323e553dfb2e87d9bfd225ca7b0f467b919d7bbd355436d3fd37cb0acd4", size = 131584, upload-time = "2026-01-07T16:24:42.685Z" },
]
[[package]]
name = "vosk"
version = "0.3.45"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "cffi" },
{ name = "requests" },
{ name = "srt" },
{ name = "tqdm" },
{ name = "websockets" },
]
wheels = [
{ url = "https://files.pythonhosted.org/packages/32/6d/728d89a4fe8d0573193eb84761b6a55e25690bac91e5bbf30308c7f80051/vosk-0.3.45-py3-none-linux_armv7l.whl", hash = "sha256:4221f83287eefe5abbe54fc6f1da5774e9e3ffcbbdca1705a466b341093b072e", size = 2388263, upload-time = "2022-12-14T23:13:34.467Z" },
{ url = "https://files.pythonhosted.org/packages/a4/23/3130a69fa0bf4f5566a52e415c18cd854bf561547bb6505666a6eb1bb625/vosk-0.3.45-py3-none-manylinux2014_aarch64.whl", hash = "sha256:54efb47dd890e544e9e20f0316413acec7f8680d04ec095c6140ab4e70262704", size = 2368543, upload-time = "2022-12-14T23:13:25.876Z" },
{ url = "https://files.pythonhosted.org/packages/fc/ca/83398cfcd557360a3d7b2d732aee1c5f6999f68618d1645f38d53e14c9ff/vosk-0.3.45-py3-none-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:25e025093c4399d7278f543568ed8cc5460ac3a4bf48c23673ace1e25d26619f", size = 7173758, upload-time = "2022-12-14T23:13:28.513Z" },
{ url = "https://files.pythonhosted.org/packages/c0/4c/deb0861f7da9696f8a255f1731bb73e9412cca29c4b3888a3fcb2a930a59/vosk-0.3.45-py3-none-win_amd64.whl", hash = "sha256:6994ddc68556c7e5730c3b6f6bad13320e3519b13ce3ed2aa25a86724e7c10ac", size = 13997596, upload-time = "2022-12-14T23:13:31.15Z" },
]
[[package]]
name = "websockets"
version = "16.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/04/24/4b2031d72e840ce4c1ccb255f693b15c334757fc50023e4db9537080b8c4/websockets-16.0.tar.gz", hash = "sha256:5f6261a5e56e8d5c42a4497b364ea24d94d9563e8fbd44e78ac40879c60179b5", size = 179346, upload-time = "2026-01-10T09:23:47.181Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/20/74/221f58decd852f4b59cc3354cccaf87e8ef695fede361d03dc9a7396573b/websockets-16.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:04cdd5d2d1dacbad0a7bf36ccbcd3ccd5a30ee188f2560b7a62a30d14107b31a", size = 177343, upload-time = "2026-01-10T09:22:21.28Z" },
{ url = "https://files.pythonhosted.org/packages/19/0f/22ef6107ee52ab7f0b710d55d36f5a5d3ef19e8a205541a6d7ffa7994e5a/websockets-16.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:8ff32bb86522a9e5e31439a58addbb0166f0204d64066fb955265c4e214160f0", size = 175021, upload-time = "2026-01-10T09:22:22.696Z" },
{ url = "https://files.pythonhosted.org/packages/10/40/904a4cb30d9b61c0e278899bf36342e9b0208eb3c470324a9ecbaac2a30f/websockets-16.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:583b7c42688636f930688d712885cf1531326ee05effd982028212ccc13e5957", size = 175320, upload-time = "2026-01-10T09:22:23.94Z" },
{ url = "https://files.pythonhosted.org/packages/9d/2f/4b3ca7e106bc608744b1cdae041e005e446124bebb037b18799c2d356864/websockets-16.0-cp310-cp310-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:7d837379b647c0c4c2355c2499723f82f1635fd2c26510e1f587d89bc2199e72", size = 183815, upload-time = "2026-01-10T09:22:25.469Z" },
{ url = "https://files.pythonhosted.org/packages/86/26/d40eaa2a46d4302becec8d15b0fc5e45bdde05191e7628405a19cf491ccd/websockets-16.0-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:df57afc692e517a85e65b72e165356ed1df12386ecb879ad5693be08fac65dde", size = 185054, upload-time = "2026-01-10T09:22:27.101Z" },
{ url = "https://files.pythonhosted.org/packages/b0/ba/6500a0efc94f7373ee8fefa8c271acdfd4dca8bd49a90d4be7ccabfc397e/websockets-16.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:2b9f1e0d69bc60a4a87349d50c09a037a2607918746f07de04df9e43252c77a3", size = 184565, upload-time = "2026-01-10T09:22:28.293Z" },
{ url = "https://files.pythonhosted.org/packages/04/b4/96bf2cee7c8d8102389374a2616200574f5f01128d1082f44102140344cc/websockets-16.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:335c23addf3d5e6a8633f9f8eda77efad001671e80b95c491dd0924587ece0b3", size = 183848, upload-time = "2026-01-10T09:22:30.394Z" },
{ url = "https://files.pythonhosted.org/packages/02/8e/81f40fb00fd125357814e8c3025738fc4ffc3da4b6b4a4472a82ba304b41/websockets-16.0-cp310-cp310-win32.whl", hash = "sha256:37b31c1623c6605e4c00d466c9d633f9b812ea430c11c8a278774a1fde1acfa9", size = 178249, upload-time = "2026-01-10T09:22:32.083Z" },
{ url = "https://files.pythonhosted.org/packages/b4/5f/7e40efe8df57db9b91c88a43690ac66f7b7aa73a11aa6a66b927e44f26fa/websockets-16.0-cp310-cp310-win_amd64.whl", hash = "sha256:8e1dab317b6e77424356e11e99a432b7cb2f3ec8c5ab4dabbcee6add48f72b35", size = 178685, upload-time = "2026-01-10T09:22:33.345Z" },
{ url = "https://files.pythonhosted.org/packages/f2/db/de907251b4ff46ae804ad0409809504153b3f30984daf82a1d84a9875830/websockets-16.0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:31a52addea25187bde0797a97d6fc3d2f92b6f72a9370792d65a6e84615ac8a8", size = 177340, upload-time = "2026-01-10T09:22:34.539Z" },
{ url = "https://files.pythonhosted.org/packages/f3/fa/abe89019d8d8815c8781e90d697dec52523fb8ebe308bf11664e8de1877e/websockets-16.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:417b28978cdccab24f46400586d128366313e8a96312e4b9362a4af504f3bbad", size = 175022, upload-time = "2026-01-10T09:22:36.332Z" },
{ url = "https://files.pythonhosted.org/packages/58/5d/88ea17ed1ded2079358b40d31d48abe90a73c9e5819dbcde1606e991e2ad/websockets-16.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:af80d74d4edfa3cb9ed973a0a5ba2b2a549371f8a741e0800cb07becdd20f23d", size = 175319, upload-time = "2026-01-10T09:22:37.602Z" },
{ url = "https://files.pythonhosted.org/packages/d2/ae/0ee92b33087a33632f37a635e11e1d99d429d3d323329675a6022312aac2/websockets-16.0-cp311-cp311-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:08d7af67b64d29823fed316505a89b86705f2b7981c07848fb5e3ea3020c1abe", size = 184631, upload-time = "2026-01-10T09:22:38.789Z" },
{ url = "https://files.pythonhosted.org/packages/c8/c5/27178df583b6c5b31b29f526ba2da5e2f864ecc79c99dae630a85d68c304/websockets-16.0-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:7be95cfb0a4dae143eaed2bcba8ac23f4892d8971311f1b06f3c6b78952ee70b", size = 185870, upload-time = "2026-01-10T09:22:39.893Z" },
{ url = "https://files.pythonhosted.org/packages/87/05/536652aa84ddc1c018dbb7e2c4cbcd0db884580bf8e95aece7593fde526f/websockets-16.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:d6297ce39ce5c2e6feb13c1a996a2ded3b6832155fcfc920265c76f24c7cceb5", size = 185361, upload-time = "2026-01-10T09:22:41.016Z" },
{ url = "https://files.pythonhosted.org/packages/6d/e2/d5332c90da12b1e01f06fb1b85c50cfc489783076547415bf9f0a659ec19/websockets-16.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:1c1b30e4f497b0b354057f3467f56244c603a79c0d1dafce1d16c283c25f6e64", size = 184615, upload-time = "2026-01-10T09:22:42.442Z" },
{ url = "https://files.pythonhosted.org/packages/77/fb/d3f9576691cae9253b51555f841bc6600bf0a983a461c79500ace5a5b364/websockets-16.0-cp311-cp311-win32.whl", hash = "sha256:5f451484aeb5cafee1ccf789b1b66f535409d038c56966d6101740c1614b86c6", size = 178246, upload-time = "2026-01-10T09:22:43.654Z" },
{ url = "https://files.pythonhosted.org/packages/54/67/eaff76b3dbaf18dcddabc3b8c1dba50b483761cccff67793897945b37408/websockets-16.0-cp311-cp311-win_amd64.whl", hash = "sha256:8d7f0659570eefb578dacde98e24fb60af35350193e4f56e11190787bee77dac", size = 178684, upload-time = "2026-01-10T09:22:44.941Z" },
{ url = "https://files.pythonhosted.org/packages/84/7b/bac442e6b96c9d25092695578dda82403c77936104b5682307bd4deb1ad4/websockets-16.0-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:71c989cbf3254fbd5e84d3bff31e4da39c43f884e64f2551d14bb3c186230f00", size = 177365, upload-time = "2026-01-10T09:22:46.787Z" },
{ url = "https://files.pythonhosted.org/packages/b0/fe/136ccece61bd690d9c1f715baaeefd953bb2360134de73519d5df19d29ca/websockets-16.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:8b6e209ffee39ff1b6d0fa7bfef6de950c60dfb91b8fcead17da4ee539121a79", size = 175038, upload-time = "2026-01-10T09:22:47.999Z" },
{ url = "https://files.pythonhosted.org/packages/40/1e/9771421ac2286eaab95b8575b0cb701ae3663abf8b5e1f64f1fd90d0a673/websockets-16.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:86890e837d61574c92a97496d590968b23c2ef0aeb8a9bc9421d174cd378ae39", size = 175328, upload-time = "2026-01-10T09:22:49.809Z" },
{ url = "https://files.pythonhosted.org/packages/18/29/71729b4671f21e1eaa5d6573031ab810ad2936c8175f03f97f3ff164c802/websockets-16.0-cp312-cp312-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:9b5aca38b67492ef518a8ab76851862488a478602229112c4b0d58d63a7a4d5c", size = 184915, upload-time = "2026-01-10T09:22:51.071Z" },
{ url = "https://files.pythonhosted.org/packages/97/bb/21c36b7dbbafc85d2d480cd65df02a1dc93bf76d97147605a8e27ff9409d/websockets-16.0-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:e0334872c0a37b606418ac52f6ab9cfd17317ac26365f7f65e203e2d0d0d359f", size = 186152, upload-time = "2026-01-10T09:22:52.224Z" },
{ url = "https://files.pythonhosted.org/packages/4a/34/9bf8df0c0cf88fa7bfe36678dc7b02970c9a7d5e065a3099292db87b1be2/websockets-16.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:a0b31e0b424cc6b5a04b8838bbaec1688834b2383256688cf47eb97412531da1", size = 185583, upload-time = "2026-01-10T09:22:53.443Z" },
{ url = "https://files.pythonhosted.org/packages/47/88/4dd516068e1a3d6ab3c7c183288404cd424a9a02d585efbac226cb61ff2d/websockets-16.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:485c49116d0af10ac698623c513c1cc01c9446c058a4e61e3bf6c19dff7335a2", size = 184880, upload-time = "2026-01-10T09:22:55.033Z" },
{ url = "https://files.pythonhosted.org/packages/91/d6/7d4553ad4bf1c0421e1ebd4b18de5d9098383b5caa1d937b63df8d04b565/websockets-16.0-cp312-cp312-win32.whl", hash = "sha256:eaded469f5e5b7294e2bdca0ab06becb6756ea86894a47806456089298813c89", size = 178261, upload-time = "2026-01-10T09:22:56.251Z" },
{ url = "https://files.pythonhosted.org/packages/c3/f0/f3a17365441ed1c27f850a80b2bc680a0fa9505d733fe152fdf5e98c1c0b/websockets-16.0-cp312-cp312-win_amd64.whl", hash = "sha256:5569417dc80977fc8c2d43a86f78e0a5a22fee17565d78621b6bb264a115d4ea", size = 178693, upload-time = "2026-01-10T09:22:57.478Z" },
{ url = "https://files.pythonhosted.org/packages/cc/9c/baa8456050d1c1b08dd0ec7346026668cbc6f145ab4e314d707bb845bf0d/websockets-16.0-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:878b336ac47938b474c8f982ac2f7266a540adc3fa4ad74ae96fea9823a02cc9", size = 177364, upload-time = "2026-01-10T09:22:59.333Z" },
{ url = "https://files.pythonhosted.org/packages/7e/0c/8811fc53e9bcff68fe7de2bcbe75116a8d959ac699a3200f4847a8925210/websockets-16.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:52a0fec0e6c8d9a784c2c78276a48a2bdf099e4ccc2a4cad53b27718dbfd0230", size = 175039, upload-time = "2026-01-10T09:23:01.171Z" },
{ url = "https://files.pythonhosted.org/packages/aa/82/39a5f910cb99ec0b59e482971238c845af9220d3ab9fa76dd9162cda9d62/websockets-16.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:e6578ed5b6981005df1860a56e3617f14a6c307e6a71b4fff8c48fdc50f3ed2c", size = 175323, upload-time = "2026-01-10T09:23:02.341Z" },
{ url = "https://files.pythonhosted.org/packages/bd/28/0a25ee5342eb5d5f297d992a77e56892ecb65e7854c7898fb7d35e9b33bd/websockets-16.0-cp313-cp313-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:95724e638f0f9c350bb1c2b0a7ad0e83d9cc0c9259f3ea94e40d7b02a2179ae5", size = 184975, upload-time = "2026-01-10T09:23:03.756Z" },
{ url = "https://files.pythonhosted.org/packages/f9/66/27ea52741752f5107c2e41fda05e8395a682a1e11c4e592a809a90c6a506/websockets-16.0-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:c0204dc62a89dc9d50d682412c10b3542d748260d743500a85c13cd1ee4bde82", size = 186203, upload-time = "2026-01-10T09:23:05.01Z" },
{ url = "https://files.pythonhosted.org/packages/37/e5/8e32857371406a757816a2b471939d51c463509be73fa538216ea52b792a/websockets-16.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:52ac480f44d32970d66763115edea932f1c5b1312de36df06d6b219f6741eed8", size = 185653, upload-time = "2026-01-10T09:23:06.301Z" },
{ url = "https://files.pythonhosted.org/packages/9b/67/f926bac29882894669368dc73f4da900fcdf47955d0a0185d60103df5737/websockets-16.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:6e5a82b677f8f6f59e8dfc34ec06ca6b5b48bc4fcda346acd093694cc2c24d8f", size = 184920, upload-time = "2026-01-10T09:23:07.492Z" },
{ url = "https://files.pythonhosted.org/packages/3c/a1/3d6ccdcd125b0a42a311bcd15a7f705d688f73b2a22d8cf1c0875d35d34a/websockets-16.0-cp313-cp313-win32.whl", hash = "sha256:abf050a199613f64c886ea10f38b47770a65154dc37181bfaff70c160f45315a", size = 178255, upload-time = "2026-01-10T09:23:09.245Z" },
{ url = "https://files.pythonhosted.org/packages/6b/ae/90366304d7c2ce80f9b826096a9e9048b4bb760e44d3b873bb272cba696b/websockets-16.0-cp313-cp313-win_amd64.whl", hash = "sha256:3425ac5cf448801335d6fdc7ae1eb22072055417a96cc6b31b3861f455fbc156", size = 178689, upload-time = "2026-01-10T09:23:10.483Z" },
{ url = "https://files.pythonhosted.org/packages/f3/1d/e88022630271f5bd349ed82417136281931e558d628dd52c4d8621b4a0b2/websockets-16.0-cp314-cp314-macosx_10_15_universal2.whl", hash = "sha256:8cc451a50f2aee53042ac52d2d053d08bf89bcb31ae799cb4487587661c038a0", size = 177406, upload-time = "2026-01-10T09:23:12.178Z" },
{ url = "https://files.pythonhosted.org/packages/f2/78/e63be1bf0724eeb4616efb1ae1c9044f7c3953b7957799abb5915bffd38e/websockets-16.0-cp314-cp314-macosx_10_15_x86_64.whl", hash = "sha256:daa3b6ff70a9241cf6c7fc9e949d41232d9d7d26fd3522b1ad2b4d62487e9904", size = 175085, upload-time = "2026-01-10T09:23:13.511Z" },
{ url = "https://files.pythonhosted.org/packages/bb/f4/d3c9220d818ee955ae390cf319a7c7a467beceb24f05ee7aaaa2414345ba/websockets-16.0-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:fd3cb4adb94a2a6e2b7c0d8d05cb94e6f1c81a0cf9dc2694fb65c7e8d94c42e4", size = 175328, upload-time = "2026-01-10T09:23:14.727Z" },
{ url = "https://files.pythonhosted.org/packages/63/bc/d3e208028de777087e6fb2b122051a6ff7bbcca0d6df9d9c2bf1dd869ae9/websockets-16.0-cp314-cp314-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:781caf5e8eee67f663126490c2f96f40906594cb86b408a703630f95550a8c3e", size = 185044, upload-time = "2026-01-10T09:23:15.939Z" },
{ url = "https://files.pythonhosted.org/packages/ad/6e/9a0927ac24bd33a0a9af834d89e0abc7cfd8e13bed17a86407a66773cc0e/websockets-16.0-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:caab51a72c51973ca21fa8a18bd8165e1a0183f1ac7066a182ff27107b71e1a4", size = 186279, upload-time = "2026-01-10T09:23:17.148Z" },
{ url = "https://files.pythonhosted.org/packages/b9/ca/bf1c68440d7a868180e11be653c85959502efd3a709323230314fda6e0b3/websockets-16.0-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:19c4dc84098e523fd63711e563077d39e90ec6702aff4b5d9e344a60cb3c0cb1", size = 185711, upload-time = "2026-01-10T09:23:18.372Z" },
{ url = "https://files.pythonhosted.org/packages/c4/f8/fdc34643a989561f217bb477cbc47a3a07212cbda91c0e4389c43c296ebf/websockets-16.0-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:a5e18a238a2b2249c9a9235466b90e96ae4795672598a58772dd806edc7ac6d3", size = 184982, upload-time = "2026-01-10T09:23:19.652Z" },
{ url = "https://files.pythonhosted.org/packages/dd/d1/574fa27e233764dbac9c52730d63fcf2823b16f0856b3329fc6268d6ae4f/websockets-16.0-cp314-cp314-win32.whl", hash = "sha256:a069d734c4a043182729edd3e9f247c3b2a4035415a9172fd0f1b71658a320a8", size = 177915, upload-time = "2026-01-10T09:23:21.458Z" },
{ url = "https://files.pythonhosted.org/packages/8a/f1/ae6b937bf3126b5134ce1f482365fde31a357c784ac51852978768b5eff4/websockets-16.0-cp314-cp314-win_amd64.whl", hash = "sha256:c0ee0e63f23914732c6d7e0cce24915c48f3f1512ec1d079ed01fc629dab269d", size = 178381, upload-time = "2026-01-10T09:23:22.715Z" },
{ url = "https://files.pythonhosted.org/packages/06/9b/f791d1db48403e1f0a27577a6beb37afae94254a8c6f08be4a23e4930bc0/websockets-16.0-cp314-cp314t-macosx_10_15_universal2.whl", hash = "sha256:a35539cacc3febb22b8f4d4a99cc79b104226a756aa7400adc722e83b0d03244", size = 177737, upload-time = "2026-01-10T09:23:24.523Z" },
{ url = "https://files.pythonhosted.org/packages/bd/40/53ad02341fa33b3ce489023f635367a4ac98b73570102ad2cdd770dacc9a/websockets-16.0-cp314-cp314t-macosx_10_15_x86_64.whl", hash = "sha256:b784ca5de850f4ce93ec85d3269d24d4c82f22b7212023c974c401d4980ebc5e", size = 175268, upload-time = "2026-01-10T09:23:25.781Z" },
{ url = "https://files.pythonhosted.org/packages/74/9b/6158d4e459b984f949dcbbb0c5d270154c7618e11c01029b9bbd1bb4c4f9/websockets-16.0-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:569d01a4e7fba956c5ae4fc988f0d4e187900f5497ce46339c996dbf24f17641", size = 175486, upload-time = "2026-01-10T09:23:27.033Z" },
{ url = "https://files.pythonhosted.org/packages/e5/2d/7583b30208b639c8090206f95073646c2c9ffd66f44df967981a64f849ad/websockets-16.0-cp314-cp314t-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:50f23cdd8343b984957e4077839841146f67a3d31ab0d00e6b824e74c5b2f6e8", size = 185331, upload-time = "2026-01-10T09:23:28.259Z" },
{ url = "https://files.pythonhosted.org/packages/45/b0/cce3784eb519b7b5ad680d14b9673a31ab8dcb7aad8b64d81709d2430aa8/websockets-16.0-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:152284a83a00c59b759697b7f9e9cddf4e3c7861dd0d964b472b70f78f89e80e", size = 186501, upload-time = "2026-01-10T09:23:29.449Z" },
{ url = "https://files.pythonhosted.org/packages/19/60/b8ebe4c7e89fb5f6cdf080623c9d92789a53636950f7abacfc33fe2b3135/websockets-16.0-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:bc59589ab64b0022385f429b94697348a6a234e8ce22544e3681b2e9331b5944", size = 186062, upload-time = "2026-01-10T09:23:31.368Z" },
{ url = "https://files.pythonhosted.org/packages/88/a8/a080593f89b0138b6cba1b28f8df5673b5506f72879322288b031337c0b8/websockets-16.0-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:32da954ffa2814258030e5a57bc73a3635463238e797c7375dc8091327434206", size = 185356, upload-time = "2026-01-10T09:23:32.627Z" },
{ url = "https://files.pythonhosted.org/packages/c2/b6/b9afed2afadddaf5ebb2afa801abf4b0868f42f8539bfe4b071b5266c9fe/websockets-16.0-cp314-cp314t-win32.whl", hash = "sha256:5a4b4cc550cb665dd8a47f868c8d04c8230f857363ad3c9caf7a0c3bf8c61ca6", size = 178085, upload-time = "2026-01-10T09:23:33.816Z" },
{ url = "https://files.pythonhosted.org/packages/9f/3e/28135a24e384493fa804216b79a6a6759a38cc4ff59118787b9fb693df93/websockets-16.0-cp314-cp314t-win_amd64.whl", hash = "sha256:b14dc141ed6d2dde437cddb216004bcac6a1df0935d79656387bd41632ba0bbd", size = 178531, upload-time = "2026-01-10T09:23:35.016Z" },
{ url = "https://files.pythonhosted.org/packages/72/07/c98a68571dcf256e74f1f816b8cc5eae6eb2d3d5cfa44d37f801619d9166/websockets-16.0-pp311-pypy311_pp73-macosx_10_15_x86_64.whl", hash = "sha256:349f83cd6c9a415428ee1005cadb5c2c56f4389bc06a9af16103c3bc3dcc8b7d", size = 174947, upload-time = "2026-01-10T09:23:36.166Z" },
{ url = "https://files.pythonhosted.org/packages/7e/52/93e166a81e0305b33fe416338be92ae863563fe7bce446b0f687b9df5aea/websockets-16.0-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:4a1aba3340a8dca8db6eb5a7986157f52eb9e436b74813764241981ca4888f03", size = 175260, upload-time = "2026-01-10T09:23:37.409Z" },
{ url = "https://files.pythonhosted.org/packages/56/0c/2dbf513bafd24889d33de2ff0368190a0e69f37bcfa19009ef819fe4d507/websockets-16.0-pp311-pypy311_pp73-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:f4a32d1bd841d4bcbffdcb3d2ce50c09c3909fbead375ab28d0181af89fd04da", size = 176071, upload-time = "2026-01-10T09:23:39.158Z" },
{ url = "https://files.pythonhosted.org/packages/a5/8f/aea9c71cc92bf9b6cc0f7f70df8f0b420636b6c96ef4feee1e16f80f75dd/websockets-16.0-pp311-pypy311_pp73-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:0298d07ee155e2e9fda5be8a9042200dd2e3bb0b8a38482156576f863a9d457c", size = 176968, upload-time = "2026-01-10T09:23:41.031Z" },
{ url = "https://files.pythonhosted.org/packages/9a/3f/f70e03f40ffc9a30d817eef7da1be72ee4956ba8d7255c399a01b135902a/websockets-16.0-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:a653aea902e0324b52f1613332ddf50b00c06fdaf7e92624fbf8c77c78fa5767", size = 178735, upload-time = "2026-01-10T09:23:42.259Z" },
{ url = "https://files.pythonhosted.org/packages/6f/28/258ebab549c2bf3e64d2b0217b973467394a9cea8c42f70418ca2c5d0d2e/websockets-16.0-py3-none-any.whl", hash = "sha256:1637db62fad1dc833276dded54215f2c7fa46912301a24bd94d45d46a011ceec", size = 171598, upload-time = "2026-01-10T09:23:45.395Z" },
]