Harden runtime diagnostics for milestone 3
Make the milestone 3 runtime story predictable instead of treating doctor,
self-check, and startup failures as loosely related surfaces. Split doctor
and self-check into distinct read-only flows, add tri-state diagnostic status
with stable IDs and next steps, and reuse that wording in CLI output, service
logs, and tray-triggered diagnostics. Add non-mutating config/model probes, a
make runtime-check gate, and public recovery/validation docs for the X11 GA
roadmap.

Validation:
- make runtime-check
- PYTHONPATH=src python3 -m unittest discover -s tests -p 'test_*.py'
- python3 -m py_compile src/*.py tests/*.py
- PYTHONPATH=src python3 -m aman doctor --help
- PYTHONPATH=src python3 -m aman self-check --help

Leave milestone 3 open in the roadmap until the manual X11 validation rows are
filled.
parent a3368056ff
commit ed1b59240b
16 changed files with 1298 additions and 248 deletions
Makefile (6 changed lines)

@@ -6,7 +6,7 @@ BUILD_DIR := $(CURDIR)/build
 RUN_ARGS := $(wordlist 2,$(words $(MAKECMDGOALS)),$(MAKECMDGOALS))
 RUN_CONFIG := $(if $(RUN_ARGS),$(abspath $(firstword $(RUN_ARGS))),$(CONFIG))
 
-.PHONY: run doctor self-check eval-models build-heuristic-dataset sync-default-model check-default-model sync test check build package package-deb package-arch package-portable release-check install-local install-service install clean-dist clean-build clean
+.PHONY: run doctor self-check runtime-check eval-models build-heuristic-dataset sync-default-model check-default-model sync test check build package package-deb package-arch package-portable release-check install-local install-service install clean-dist clean-build clean
 EVAL_DATASET ?= $(CURDIR)/benchmarks/cleanup_dataset.jsonl
 EVAL_MATRIX ?= $(CURDIR)/benchmarks/model_matrix.small_first.json
 EVAL_OUTPUT ?= $(CURDIR)/benchmarks/results/latest.json

@@ -31,6 +31,9 @@ doctor:
 self-check:
 	uv run aman self-check --config $(CONFIG)
 
+runtime-check:
+	$(PYTHON) -m unittest tests.test_diagnostics tests.test_aman_cli tests.test_aman tests.test_aiprocess
+
 build-heuristic-dataset:
 	uv run aman build-heuristic-dataset --input $(EVAL_HEURISTIC_RAW) --output $(EVAL_HEURISTIC_DATASET)
 

@@ -70,6 +73,7 @@ package-portable:
 release-check:
 	$(MAKE) check-default-model
 	$(PYTHON) -m py_compile src/*.py tests/*.py
+	$(MAKE) runtime-check
 	$(MAKE) test
 	$(MAKE) build
README.md (29 changed lines)

@@ -103,6 +103,31 @@ When Aman does not behave as expected, use this order:
 3. Inspect `journalctl --user -u aman -f`.
 4. Re-run Aman in the foreground with `aman run --config ~/.config/aman/config.json --verbose`.
 
+See [`docs/runtime-recovery.md`](docs/runtime-recovery.md) for the failure IDs,
+example output, and the common recovery branches behind this sequence.
+
+## Diagnostics
+
+- `aman doctor` is the fast, read-only preflight for config, X11 session,
+  audio runtime, input resolution, hotkey availability, injection backend
+  selection, and service prerequisites.
+- `aman self-check` is the deeper, still read-only installed-system readiness
+  check. It includes every `doctor` check plus managed model cache, cache
+  writability, service unit/state, and startup readiness.
+- The tray `Run Diagnostics` action runs the same deeper `self-check` path and
+  logs any non-`ok` results.
+- Exit code `0` means every check finished as `ok` or `warn`. Exit code `2`
+  means at least one check finished as `fail`.
+
+Example output:
+
+```text
+[OK] config.load: loaded config from /home/user/.config/aman/config.json
+[WARN] model.cache: managed editor model is not cached at /home/user/.cache/aman/models/Qwen2.5-1.5B-Instruct-Q4_K_M.gguf | next_step: start Aman once on a networked connection so it can download the managed editor model, then rerun `aman self-check --config /home/user/.config/aman/config.json`
+[FAIL] service.state: user service is installed but failed to start | next_step: inspect `journalctl --user -u aman -f` to see why aman.service is failing
+overall: fail
+```
+
 ## Runtime Dependencies
 
 - X11

@@ -319,6 +344,8 @@ Service notes:
   setup, support, or debugging.
 - Start recovery with `aman doctor`, then `aman self-check`, before inspecting
   `systemctl --user status aman` and `journalctl --user -u aman -f`.
+- See [`docs/runtime-recovery.md`](docs/runtime-recovery.md) for the expected
+  diagnostic IDs and next steps.
 
 ## Usage
 

@@ -354,6 +381,7 @@ make package
 make package-portable
 make package-deb
 make package-arch
+make runtime-check
 make release-check
 ```
 

@@ -398,6 +426,7 @@ make run
 make run config.example.json
 make doctor
 make self-check
+make runtime-check
 make eval-models
 make sync-default-model
 make check-default-model
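The exit-code rule in the Diagnostics section above is simple enough to state as code. A minimal sketch of that rule, assuming a toy `Check` shape (the names here are illustrative, not the project's actual types):

```python
from dataclasses import dataclass


@dataclass
class Check:
    check_id: str
    status: str  # one of "ok", "warn", "fail"


def overall_status(checks: list[Check]) -> str:
    # "fail" wins over "warn", which wins over "ok"
    if any(c.status == "fail" for c in checks):
        return "fail"
    if any(c.status == "warn" for c in checks):
        return "warn"
    return "ok"


def exit_code(checks: list[Check]) -> int:
    # exit 0 for ok- or warn-only runs, 2 when at least one check failed
    return 2 if overall_status(checks) == "fail" else 0
```

This matches the documented contract: warnings degrade the report but never the exit status, so scripts can gate strictly on failures.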
@@ -144,3 +144,6 @@ If installation succeeds but runtime behavior is wrong, use the supported recovery order:
 2. `aman self-check --config ~/.config/aman/config.json`
 3. `journalctl --user -u aman -f`
 4. `aman run --config ~/.config/aman/config.json --verbose`
+
+The failure IDs and example outputs for this flow are documented in
+[`docs/runtime-recovery.md`](./runtime-recovery.md).
@@ -7,6 +7,7 @@ GA signoff bar. The GA signoff sections are required for `v1.0.0` and later.
 2. Bump `project.version` in `pyproject.toml`.
 3. Run quality and build gates:
    - `make release-check`
+   - `make runtime-check`
    - `make check-default-model`
 4. Ensure model promotion artifacts are current:
    - `benchmarks/results/latest.json` has the latest `winner_recommendation.name`

@@ -34,7 +35,11 @@ GA signoff bar. The GA signoff sections are required for `v1.0.0` and later.
 - The support matrix names X11, runtime dependency ownership, `systemd --user`, and the representative distro families.
 - Service mode is documented as the default daily-use path and `aman run` as the manual support/debug path.
 - The recovery sequence `aman doctor` -> `aman self-check` -> `journalctl --user -u aman` -> `aman run --verbose` is documented consistently.
-11. GA validation signoff (`v1.0.0` and later):
+11. GA runtime reliability signoff (`v1.0.0` and later):
+    - `make runtime-check` passes.
+    - [`docs/runtime-recovery.md`](./runtime-recovery.md) matches the shipped diagnostic IDs and next-step wording.
+    - [`docs/x11-ga/runtime-validation-report.md`](./x11-ga/runtime-validation-report.md) contains current automated evidence and release-specific manual validation entries.
+12. GA validation signoff (`v1.0.0` and later):
     - Validation evidence exists for Debian/Ubuntu, Arch, Fedora, and openSUSE.
     - The portable installer, upgrade path, and uninstall path are validated.
     - End-user docs and release notes match the shipped artifact set.
docs/runtime-recovery.md (new file, 48 lines)

@@ -0,0 +1,48 @@
+# Runtime Recovery Guide
+
+Use this guide when Aman is installed but not behaving correctly.
+
+## Command roles
+
+- `aman doctor --config ~/.config/aman/config.json` is the fast, read-only preflight for config, X11 session, audio runtime, input device resolution, hotkey availability, injection backend selection, and service prerequisites.
+- `aman self-check --config ~/.config/aman/config.json` is the deeper, still read-only readiness check. It includes every `doctor` check plus the managed model cache, cache writability, installed user service, current service state, and startup readiness.
+- Tray `Run Diagnostics` uses the same deeper `self-check` path and logs any non-`ok` results.
+
+## Reading the output
+
+- `ok`: the checked surface is ready.
+- `warn`: the checked surface is degraded or incomplete, but the command still exits `0`.
+- `fail`: the supported path is blocked, and the command exits `2`.
+
+Example output:
+
+```text
+[OK] config.load: loaded config from /home/user/.config/aman/config.json
+[WARN] model.cache: managed editor model is not cached at /home/user/.cache/aman/models/Qwen2.5-1.5B-Instruct-Q4_K_M.gguf | next_step: start Aman once on a networked connection so it can download the managed editor model, then rerun `aman self-check --config /home/user/.config/aman/config.json`
+[FAIL] service.state: user service is installed but failed to start | next_step: inspect `journalctl --user -u aman -f` to see why aman.service is failing
+overall: fail
+```
+
+## Failure map
+
+| Symptom | First command | Diagnostic ID | Meaning | Next step |
+| --- | --- | --- | --- | --- |
+| Config missing or invalid | `aman doctor` | `config.load` | Config is absent or cannot be parsed | Save settings, fix the JSON, or rerun `aman init --force`, then rerun `doctor` |
+| No X11 session | `aman doctor` | `session.x11` | `DISPLAY` is missing or Wayland was detected | Start Aman from the same X11 user session you expect to use daily |
+| Audio runtime or microphone missing | `aman doctor` | `runtime.audio` or `audio.input` | PortAudio or the selected input device is unavailable | Install runtime dependencies, connect a microphone, or choose a valid `recording.input` |
+| Hotkey cannot be registered | `aman doctor` | `hotkey.parse` | The configured hotkey is invalid or already taken | Choose a different hotkey in Settings |
+| Output injection fails | `aman doctor` | `injection.backend` | The chosen X11 output path is not usable | Switch to a supported backend or rerun in the foreground with `--verbose` |
+| Managed editor model missing or corrupt | `aman self-check` | `model.cache` | The managed model is absent or has a bad checksum | Start Aman once on a networked connection, or clear the broken cache and retry |
+| Model cache directory is not writable | `aman self-check` | `cache.writable` | Aman cannot create or update its managed model cache | Fix permissions on `~/.cache/aman/models/` |
+| User service missing or disabled | `aman self-check` | `service.unit` or `service.state` | The service was not installed cleanly or is not active | Reinstall Aman or run `systemctl --user enable --now aman` |
+| Startup still fails after install | `aman self-check` | `startup.readiness` | Aman can load config but cannot assemble its runtime without failing | Fix the named runtime dependency, custom model path, or editor dependency, then rerun `self-check` |
+
+## Escalation order
+
+1. Run `aman doctor --config ~/.config/aman/config.json`.
+2. Run `aman self-check --config ~/.config/aman/config.json`.
+3. Inspect `journalctl --user -u aman -f`.
+4. Re-run Aman in the foreground with `aman run --config ~/.config/aman/config.json --verbose`.
+
+If you are collecting evidence for a release or support handoff, copy the first
+non-`ok` diagnostic line and the first matching `journalctl` failure block.
@@ -16,7 +16,7 @@ Once Aman is installed, the next GA risk is not feature depth. It is whether the
 - Define `aman doctor` as the fast preflight check for config, runtime dependencies, hotkey validity, audio device resolution, and service prerequisites.
 - Define `aman self-check` as the deeper installed-system readiness check, including managed model availability, writable cache locations, and end-to-end startup prerequisites.
 - Make diagnostics return actionable messages with one next step, not generic failures.
-- Standardize startup and runtime error wording across CLI output, service logs, tray notifications, and docs.
+- Standardize startup and runtime error wording across CLI output, service logs, tray-triggered diagnostics, and docs.
 - Cover recovery paths for:
   - broken config
   - missing audio device

@@ -57,7 +57,7 @@ Once Aman is installed, the next GA risk is not feature depth. It is whether the
 
 ## Evidence required to close
 
-- Updated command help and docs for `doctor` and `self-check`.
+- Updated command help and docs for `doctor` and `self-check`, including a public runtime recovery guide.
 - Diagnostic output examples for success, warning, and failure cases.
 - A release validation report covering restart, offline-start, and representative recovery scenarios.
 - Manual support runbooks that use diagnostics first and verbose foreground mode second.
@@ -6,14 +6,13 @@ Aman is not starting from zero. It already has a working X11 daemon, a settings-
 
 The current gaps are:
 
-- No single distro-agnostic end-user install, update, and uninstall path. The repo documents a Debian package path and partial Arch support, but not one canonical path for X11 users on Fedora, openSUSE, or other mainstream distros.
+- The canonical portable install, update, and uninstall path now exists, but the representative distro rows still need real manual validation evidence before it can count as a GA-ready channel.
-- No explicit support contract for "X11 users on any distro." The current docs describe target personas and a package-first approach, but they do not define the exact environment that GA will support.
+- The X11 support contract and service-versus-foreground split are now documented, but the public release surface still needs the remaining trust and support work from milestones 4 and 5.
-- No clear split between service mode and foreground/manual mode. The docs describe enabling a user service and also tell users to run `aman run`, which leaves the default lifecycle ambiguous.
+- Validation matrices now exist for portable lifecycle and runtime reliability, but they are not yet filled with release-specific manual evidence across Debian/Ubuntu, Arch, Fedora, and openSUSE.
-- No representative distro validation matrix. There is no evidence standard that says which distros must pass install, first run, update, restart, and uninstall checks before release.
 - Incomplete trust surface. The project still needs a real license file, real maintainer/contact metadata, real project URLs, published release artifacts, and public checksums.
 - Incomplete first-run story. The product describes a settings window and tray workflow, but there is no short happy path, no expected-result walkthrough, and no visual proof that the experience is real.
-- Diagnostics exist, but they are not yet the canonical recovery path for end users. `doctor` and `self-check` are present, but the docs do not yet teach users to rely on them first.
+- Diagnostics are now the canonical recovery path, but milestone 3 still needs release-specific X11 evidence for restart, offline-start, tray diagnostics, and recovery scenarios.
-- Release process exists, but not yet as a GA signoff system. The current release checklist is a good base, but it does not yet enforce the broader validation and support evidence required for a public 1.0 release.
+- The release checklist now includes GA signoff gates, but the project is still short of the broader legal, release-publication, and validation evidence needed for a credible public 1.0 release.
 
 ## GA target
 

@@ -93,7 +92,13 @@ Any future docs, tray copy, and release notes should point users to this same sequence.
   [`portable-validation-matrix.md`](./portable-validation-matrix.md) are filled
   with real manual validation evidence.
 - [ ] [Milestone 3: Runtime Reliability and Diagnostics](./03-runtime-reliability-and-diagnostics.md)
-  Make startup, failure handling, and recovery predictable.
+  Implementation landed on 2026-03-12: `doctor` and `self-check` now have
+  distinct read-only roles, runtime failures log stable IDs plus next steps,
+  `make runtime-check` is part of the release surface, and the runtime recovery
+  guide plus validation report now exist. Leave this milestone open until the
+  release-specific manual rows in
+  [`runtime-validation-report.md`](./runtime-validation-report.md) are filled
+  with real X11 validation evidence.
 - [ ] [Milestone 4: First-Run UX and Support Docs](./04-first-run-ux-and-support-docs.md)
   Turn the product from "documented by the author" into "understandable by a new user."
 - [ ] [Milestone 5: GA Candidate Validation and Release](./05-ga-candidate-validation-and-release.md)
docs/x11-ga/runtime-validation-report.md (new file, 44 lines)

@@ -0,0 +1,44 @@
+# Runtime Validation Report
+
+This document tracks milestone 3 evidence for runtime reliability and
+diagnostics.
+
+## Automated evidence
+
+Completed on 2026-03-12:
+
+- `PYTHONPATH=src python3 -m unittest tests.test_diagnostics tests.test_aman_cli tests.test_aman tests.test_aiprocess`
+  - covers `doctor` versus `self-check`, tri-state diagnostic output, warning
+    versus failure exit codes, read-only model cache probing, and actionable
+    runtime log wording for audio, hotkey, injection, editor, and startup
+    failures
+- `PYTHONPATH=src python3 -m unittest discover -s tests -p 'test_*.py'`
+  - confirms the runtime and diagnostics changes do not regress the broader
+    daemon, CLI, config, and portable bundle flows
+- `python3 -m py_compile src/*.py tests/*.py`
+  - verifies the updated runtime and diagnostics modules compile cleanly
+
+## Automated scenario coverage
+
+| Scenario | Evidence | Status | Notes |
+| --- | --- | --- | --- |
+| `doctor` and `self-check` have distinct roles | `tests.test_diagnostics`, `tests.test_aman_cli` | Complete | `self-check` extends `doctor` with service/model/startup readiness checks |
+| Missing config remains read-only | `tests.test_diagnostics` | Complete | Missing config yields `warn` and does not write a default file |
+| Managed model cache probing is read-only | `tests.test_diagnostics`, `tests.test_aiprocess` | Complete | `self-check` uses cache probing and does not download or repair |
+| Warning-only diagnostics exit `0`; failures exit `2` | `tests.test_aman_cli` | Complete | Human and JSON output share the same status model |
+| Runtime failures log stable IDs and one next step | `tests.test_aman_cli`, `tests.test_aman` | Complete | Covers hotkey, audio-input, injection, editor, and startup failure wording |
+| Repeated start/stop and shutdown return to `idle` | `tests.test_aman` | Complete | Current daemon tests cover start, stop, cancel, pause, and shutdown paths |
+
+## Manual X11 validation
+
+These rows must be filled with release-specific evidence before milestone 3 can
+be closed as complete for GA signoff.
+
+| Scenario | Debian/Ubuntu | Arch | Fedora | openSUSE | Reviewer | Status | Notes |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| Service restart after a successful install | Pending | Pending | Pending | Pending | Pending | Pending | Verify `systemctl --user restart aman` returns to the tray/ready state |
+| Reboot followed by successful reuse | Pending | Pending | Pending | Pending | Pending | Pending | Validate recovery after a real session restart |
+| Offline startup with an already-cached model | Pending | Pending | Pending | Pending | Pending | Pending | Disable network, then confirm the cached path still starts |
+| Missing runtime dependency recovery | Pending | Pending | Pending | Pending | Pending | Pending | Remove one documented dependency, verify diagnostics point to the correct fix |
+| Tray-triggered diagnostics logging | Pending | Pending | Pending | Pending | Pending | Pending | Use `Run Diagnostics` and confirm the same IDs/messages appear in logs |
+| Service-failure escalation path | Pending | Pending | Pending | Pending | Pending | Pending | Confirm `doctor` -> `self-check` -> `journalctl` -> `aman run --verbose` is enough to explain the failure |
@@ -34,6 +34,13 @@ class ProcessTimings:
     total_ms: float
 
 
+@dataclass(frozen=True)
+class ManagedModelStatus:
+    status: str
+    path: Path
+    message: str
+
+
 _EXAMPLE_CASES = [
     {
         "id": "corr-time-01",

@@ -748,6 +755,32 @@ def ensure_model():
     return MODEL_PATH
 
 
+def probe_managed_model() -> ManagedModelStatus:
+    if not MODEL_PATH.exists():
+        return ManagedModelStatus(
+            status="missing",
+            path=MODEL_PATH,
+            message=f"managed editor model is not cached at {MODEL_PATH}",
+        )
+
+    checksum = _sha256_file(MODEL_PATH)
+    if checksum.casefold() != MODEL_SHA256.casefold():
+        return ManagedModelStatus(
+            status="invalid",
+            path=MODEL_PATH,
+            message=(
+                "managed editor model checksum mismatch "
+                f"(expected {MODEL_SHA256}, got {checksum})"
+            ),
+        )
+
+    return ManagedModelStatus(
+        status="ready",
+        path=MODEL_PATH,
+        message=f"managed editor model is ready at {MODEL_PATH}",
+    )
+
+
 def _assert_expected_model_checksum(checksum: str) -> None:
     if checksum.casefold() == MODEL_SHA256.casefold():
         return
225
src/aman.py
225
src/aman.py
|
|
@ -23,7 +23,16 @@ from config import Config, ConfigValidationError, load, redacted_dict, save, val
|
||||||
from constants import DEFAULT_CONFIG_PATH, MODEL_PATH, RECORD_TIMEOUT_SEC
|
from constants import DEFAULT_CONFIG_PATH, MODEL_PATH, RECORD_TIMEOUT_SEC
|
||||||
from config_ui import ConfigUiResult, run_config_ui, show_about_dialog, show_help_dialog
|
from config_ui import ConfigUiResult, run_config_ui, show_about_dialog, show_help_dialog
|
||||||
from desktop import get_desktop_adapter
|
from desktop import get_desktop_adapter
|
||||||
from diagnostics import run_diagnostics
|
from diagnostics import (
|
||||||
|
doctor_command,
|
||||||
|
format_diagnostic_line,
|
||||||
|
format_support_line,
|
||||||
|
journalctl_command,
|
||||||
|
run_doctor,
|
||||||
|
run_self_check,
|
||||||
|
self_check_command,
|
||||||
|
verbose_run_command,
|
||||||
|
)
|
||||||
from engine.pipeline import PipelineEngine
|
from engine.pipeline import PipelineEngine
|
||||||
from model_eval import (
|
from model_eval import (
|
||||||
build_heuristic_dataset,
|
build_heuristic_dataset,
|
||||||
|
|
@ -286,10 +295,18 @@ def _summarize_bench_runs(runs: list[BenchRunMetrics]) -> BenchSummary:
|
||||||
|
|
||||||
|
|
||||||
class Daemon:
|
class Daemon:
|
||||||
def __init__(self, cfg: Config, desktop, *, verbose: bool = False):
|
def __init__(
|
||||||
|
self,
|
||||||
|
cfg: Config,
|
||||||
|
desktop,
|
||||||
|
*,
|
||||||
|
verbose: bool = False,
|
||||||
|
config_path: Path | None = None,
|
||||||
|
):
|
||||||
self.cfg = cfg
|
self.cfg = cfg
|
||||||
self.desktop = desktop
|
self.desktop = desktop
|
||||||
self.verbose = verbose
|
self.verbose = verbose
|
||||||
|
self.config_path = config_path or DEFAULT_CONFIG_PATH
|
||||||
self.lock = threading.Lock()
|
self.lock = threading.Lock()
|
||||||
self._shutdown_requested = threading.Event()
|
self._shutdown_requested = threading.Event()
|
||||||
self._paused = False
|
self._paused = False
|
||||||
|
|
@ -447,7 +464,12 @@ class Daemon:
|
||||||
try:
|
try:
|
||||||
stream, record = start_audio_recording(self.cfg.recording.input)
|
stream, record = start_audio_recording(self.cfg.recording.input)
|
||||||
except Exception as exc:
|
except Exception as exc:
|
||||||
logging.error("record start failed: %s", exc)
|
_log_support_issue(
|
||||||
|
logging.ERROR,
|
||||||
|
"audio.input",
|
||||||
|
f"record start failed: {exc}",
|
||||||
|
next_step=f"run `{doctor_command(self.config_path)}` and verify the selected input device",
|
||||||
|
)
|
||||||
return
|
return
|
||||||
if not self._arm_cancel_listener():
|
if not self._arm_cancel_listener():
|
||||||
try:
|
try:
|
||||||
|
|
@ -509,7 +531,12 @@ class Daemon:
|
||||||
try:
|
try:
|
||||||
audio = stop_audio_recording(stream, record)
|
audio = stop_audio_recording(stream, record)
|
||||||
except Exception as exc:
|
except Exception as exc:
|
||||||
logging.error("record stop failed: %s", exc)
|
_log_support_issue(
|
||||||
|
logging.ERROR,
|
||||||
|
"runtime.audio",
|
||||||
|
f"record stop failed: {exc}",
|
||||||
|
next_step=f"rerun `{doctor_command(self.config_path)}` and verify the audio runtime",
|
||||||
|
)
|
||||||
self.set_state(State.IDLE)
|
self.set_state(State.IDLE)
|
||||||
return
|
return
|
||||||
|
|
||||||
|
|
@ -518,7 +545,12 @@ class Daemon:
|
||||||
return
|
return
|
||||||
|
|
||||||
if audio.size == 0:
|
if audio.size == 0:
|
||||||
logging.error("no audio captured")
|
_log_support_issue(
|
||||||
|
logging.ERROR,
|
||||||
|
"runtime.audio",
|
||||||
|
"no audio was captured from the active input device",
|
||||||
|
next_step="verify the selected microphone level and rerun diagnostics",
|
||||||
|
)
|
||||||
self.set_state(State.IDLE)
|
self.set_state(State.IDLE)
|
||||||
return
|
return
|
||||||
|
|
||||||
|
|
@ -526,7 +558,12 @@ class Daemon:
|
||||||
logging.info("stt started")
|
logging.info("stt started")
|
||||||
asr_result = self._transcribe_with_metrics(audio)
|
asr_result = self._transcribe_with_metrics(audio)
|
||||||
except Exception as exc:
|
except Exception as exc:
|
||||||
logging.error("stt failed: %s", exc)
|
_log_support_issue(
|
||||||
|
logging.ERROR,
|
||||||
|
"startup.readiness",
|
||||||
|
f"stt failed: {exc}",
|
||||||
|
next_step=f"run `{self_check_command(self.config_path)}` and then `{verbose_run_command(self.config_path)}`",
|
||||||
|
)
|
||||||
self.set_state(State.IDLE)
|
self.set_state(State.IDLE)
|
||||||
return
|
return
|
||||||
|
|
||||||
|
|
@@ -555,7 +592,12 @@ class Daemon:
                 verbose=self.log_transcript,
             )
         except Exception as exc:
-            logging.error("editor stage failed: %s", exc)
+            _log_support_issue(
+                logging.ERROR,
+                "model.cache",
+                f"editor stage failed: {exc}",
+                next_step=f"run `{self_check_command(self.config_path)}` and inspect `{journalctl_command()}` if the service keeps failing",
+            )
             self.set_state(State.IDLE)
             return
 
 
@@ -580,7 +622,12 @@ class Daemon:
                 ),
             )
         except Exception as exc:
-            logging.error("output failed: %s", exc)
+            _log_support_issue(
+                logging.ERROR,
+                "injection.backend",
+                f"output failed: {exc}",
+                next_step=f"run `{doctor_command(self.config_path)}` and then `{verbose_run_command(self.config_path)}`",
+            )
         finally:
             self.set_state(State.IDLE)
 
 
@@ -964,8 +1011,8 @@ def _build_parser() -> argparse.ArgumentParser:
 
     doctor_parser = subparsers.add_parser(
         "doctor",
-        help="run preflight diagnostics for config and local environment",
-        description="Run preflight diagnostics for config and the local environment.",
+        help="run fast preflight diagnostics for config and local environment",
+        description="Run fast preflight diagnostics for config and the local environment.",
     )
     doctor_parser.add_argument("--config", default="", help="path to config.json")
     doctor_parser.add_argument("--json", action="store_true", help="print JSON output")
 
@@ -973,8 +1020,8 @@ def _build_parser() -> argparse.ArgumentParser:
 
     self_check_parser = subparsers.add_parser(
         "self-check",
-        help="run installed-system readiness diagnostics",
-        description="Run installed-system readiness diagnostics.",
+        help="run deeper installed-system readiness diagnostics without modifying local state",
+        description="Run deeper installed-system readiness diagnostics without modifying local state.",
     )
     self_check_parser.add_argument("--config", default="", help="path to config.json")
     self_check_parser.add_argument("--json", action="store_true", help="print JSON output")
 
@@ -1095,21 +1142,38 @@ def _configure_logging(verbose: bool) -> None:
     )
 
 
-def _doctor_command(args: argparse.Namespace) -> int:
-    report = run_diagnostics(args.config)
+def _log_support_issue(
+    level: int,
+    issue_id: str,
+    message: str,
+    *,
+    next_step: str = "",
+) -> None:
+    logging.log(level, format_support_line(issue_id, message, next_step=next_step))
+
+
+def _diagnostic_command(
+    args: argparse.Namespace,
+    runner,
+) -> int:
+    report = runner(args.config)
     if args.json:
         print(report.to_json())
     else:
         for check in report.checks:
-            status = "OK" if check.ok else "FAIL"
-            line = f"[{status}] {check.id}: {check.message}"
-            if check.hint:
-                line = f"{line} | hint: {check.hint}"
-            print(line)
-        print(f"overall: {'ok' if report.ok else 'failed'}")
+            print(format_diagnostic_line(check))
+        print(f"overall: {report.status}")
     return 0 if report.ok else 2
 
 
+def _doctor_command(args: argparse.Namespace) -> int:
+    return _diagnostic_command(args, run_doctor)
+
+
+def _self_check_command(args: argparse.Namespace) -> int:
+    return _diagnostic_command(args, run_self_check)
+
+
 def _read_bench_input_text(args: argparse.Namespace) -> str:
     if args.text_file:
         try:
@@ -1413,7 +1477,12 @@ def _run_command(args: argparse.Namespace) -> int:
     try:
         desktop = get_desktop_adapter()
     except Exception as exc:
-        logging.error("startup failed: %s", exc)
+        _log_support_issue(
+            logging.ERROR,
+            "session.x11",
+            f"startup failed: {exc}",
+            next_step="log into an X11 session and rerun Aman",
+        )
         return 1
 
     if not config_existed_before_start:
@@ -1424,23 +1493,43 @@ def _run_command(args: argparse.Namespace) -> int:
     try:
         cfg = _load_runtime_config(config_path)
     except ConfigValidationError as exc:
-        logging.error("startup failed: invalid config field '%s': %s", exc.field, exc.reason)
+        _log_support_issue(
+            logging.ERROR,
+            "config.load",
+            f"startup failed: invalid config field '{exc.field}': {exc.reason}",
+            next_step=f"run `{doctor_command(config_path)}` after fixing the config",
+        )
         if exc.example_fix:
             logging.error("example fix: %s", exc.example_fix)
         return 1
     except Exception as exc:
-        logging.error("startup failed: %s", exc)
+        _log_support_issue(
+            logging.ERROR,
+            "config.load",
+            f"startup failed: {exc}",
+            next_step=f"run `{doctor_command(config_path)}` to inspect config readiness",
+        )
         return 1
 
     try:
         validate(cfg)
     except ConfigValidationError as exc:
-        logging.error("startup failed: invalid config field '%s': %s", exc.field, exc.reason)
+        _log_support_issue(
+            logging.ERROR,
+            "config.load",
+            f"startup failed: invalid config field '{exc.field}': {exc.reason}",
+            next_step=f"run `{doctor_command(config_path)}` after fixing the config",
+        )
         if exc.example_fix:
             logging.error("example fix: %s", exc.example_fix)
         return 1
     except Exception as exc:
-        logging.error("startup failed: %s", exc)
+        _log_support_issue(
+            logging.ERROR,
+            "config.load",
+            f"startup failed: {exc}",
+            next_step=f"run `{doctor_command(config_path)}` to inspect config readiness",
+        )
        return 1
 
     logging.info("hotkey: %s", cfg.daemon.hotkey)
 
@@ -1463,9 +1552,14 @@
         logging.info("editor backend: local_llama_builtin (%s)", MODEL_PATH)
 
     try:
-        daemon = Daemon(cfg, desktop, verbose=args.verbose)
+        daemon = Daemon(cfg, desktop, verbose=args.verbose, config_path=config_path)
     except Exception as exc:
-        logging.error("startup failed: %s", exc)
+        _log_support_issue(
+            logging.ERROR,
+            "startup.readiness",
+            f"startup failed: {exc}",
+            next_step=f"run `{self_check_command(config_path)}` and inspect `{journalctl_command()}` if the service still fails",
+        )
         return 1
 
     shutdown_once = threading.Event()
 
@@ -1500,22 +1594,42 @@ def _run_command(args: argparse.Namespace) -> int:
         try:
             new_cfg = load(str(config_path))
         except ConfigValidationError as exc:
-            logging.error("reload failed: invalid config field '%s': %s", exc.field, exc.reason)
+            _log_support_issue(
+                logging.ERROR,
+                "config.load",
+                f"reload failed: invalid config field '{exc.field}': {exc.reason}",
+                next_step=f"run `{doctor_command(config_path)}` after fixing the config",
+            )
             if exc.example_fix:
                 logging.error("reload example fix: %s", exc.example_fix)
             return
         except Exception as exc:
-            logging.error("reload failed: %s", exc)
+            _log_support_issue(
+                logging.ERROR,
+                "config.load",
+                f"reload failed: {exc}",
+                next_step=f"run `{doctor_command(config_path)}` to inspect config readiness",
+            )
             return
         try:
             desktop.start_hotkey_listener(new_cfg.daemon.hotkey, hotkey_callback)
         except Exception as exc:
-            logging.error("reload failed: could not apply hotkey '%s': %s", new_cfg.daemon.hotkey, exc)
+            _log_support_issue(
+                logging.ERROR,
+                "hotkey.parse",
+                f"reload failed: could not apply hotkey '{new_cfg.daemon.hotkey}': {exc}",
+                next_step=f"run `{doctor_command(config_path)}` and choose a different hotkey in Settings",
+            )
             return
         try:
             daemon.apply_config(new_cfg)
         except Exception as exc:
-            logging.error("reload failed: could not apply runtime engines: %s", exc)
+            _log_support_issue(
+                logging.ERROR,
+                "startup.readiness",
+                f"reload failed: could not apply runtime engines: {exc}",
+                next_step=f"run `{self_check_command(config_path)}` and then `{verbose_run_command(config_path)}`",
+            )
             return
         cfg = new_cfg
         logging.info("config reloaded from %s", config_path)
 
@@ -1538,33 +1652,45 @@ def _run_command(args: argparse.Namespace) -> int:
             save(config_path, result.config)
             desktop.start_hotkey_listener(result.config.daemon.hotkey, hotkey_callback)
         except ConfigValidationError as exc:
-            logging.error("settings apply failed: invalid config field '%s': %s", exc.field, exc.reason)
+            _log_support_issue(
+                logging.ERROR,
+                "config.load",
+                f"settings apply failed: invalid config field '{exc.field}': {exc.reason}",
+                next_step=f"run `{doctor_command(config_path)}` after fixing the config",
+            )
             if exc.example_fix:
                 logging.error("settings example fix: %s", exc.example_fix)
             return
         except Exception as exc:
-            logging.error("settings apply failed: %s", exc)
+            _log_support_issue(
+                logging.ERROR,
+                "hotkey.parse",
+                f"settings apply failed: {exc}",
+                next_step=f"run `{doctor_command(config_path)}` and check the configured hotkey",
+            )
             return
         try:
             daemon.apply_config(result.config)
         except Exception as exc:
-            logging.error("settings apply failed: could not apply runtime engines: %s", exc)
+            _log_support_issue(
+                logging.ERROR,
+                "startup.readiness",
+                f"settings apply failed: could not apply runtime engines: {exc}",
+                next_step=f"run `{self_check_command(config_path)}` and then `{verbose_run_command(config_path)}`",
+            )
             return
         cfg = result.config
         logging.info("settings applied from tray")
 
     def run_diagnostics_callback():
-        report = run_diagnostics(str(config_path))
-        if report.ok:
-            logging.info("diagnostics passed (%d checks)", len(report.checks))
+        report = run_self_check(str(config_path))
+        if report.status == "ok":
+            logging.info("diagnostics finished (%s, %d checks)", report.status, len(report.checks))
             return
-        failed = [check for check in report.checks if not check.ok]
-        logging.warning("diagnostics failed (%d/%d checks)", len(failed), len(report.checks))
-        for check in failed:
-            if check.hint:
-                logging.warning("%s: %s | hint: %s", check.id, check.message, check.hint)
-            else:
-                logging.warning("%s: %s", check.id, check.message)
+        flagged = [check for check in report.checks if check.status != "ok"]
+        logging.warning("diagnostics finished (%s, %d/%d checks need attention)", report.status, len(flagged), len(report.checks))
+        for check in flagged:
+            logging.warning("%s", format_diagnostic_line(check))
 
     def open_config_path_callback():
         logging.info("config path: %s", config_path)
 
@@ -1575,7 +1701,12 @@ def _run_command(args: argparse.Namespace) -> int:
             hotkey_callback,
         )
     except Exception as exc:
-        logging.error("hotkey setup failed: %s", exc)
+        _log_support_issue(
+            logging.ERROR,
+            "hotkey.parse",
+            f"hotkey setup failed: {exc}",
+            next_step=f"run `{doctor_command(config_path)}` and choose a different hotkey if needed",
+        )
         return 1
     logging.info("ready")
     try:
@@ -1607,10 +1738,10 @@ def main(argv: list[str] | None = None) -> int:
         return _run_command(args)
     if args.command == "doctor":
         _configure_logging(args.verbose)
-        return _doctor_command(args)
+        return _diagnostic_command(args, run_doctor)
     if args.command == "self-check":
         _configure_logging(args.verbose)
-        return _doctor_command(args)
+        return _diagnostic_command(args, run_self_check)
     if args.command == "bench":
         _configure_logging(args.verbose)
         return _bench_command(args)

@@ -112,11 +112,10 @@ class Config:
     vocabulary: VocabularyConfig = field(default_factory=VocabularyConfig)
 
 
-def load(path: str | None) -> Config:
+def _load_from_path(path: Path, *, create_default: bool) -> Config:
     cfg = Config()
-    p = Path(path) if path else DEFAULT_CONFIG_PATH
-    if p.exists():
-        data = json.loads(p.read_text(encoding="utf-8"))
+    if path.exists():
+        data = json.loads(path.read_text(encoding="utf-8"))
         if not isinstance(data, dict):
             _raise_cfg_error(
                 "config",
@@ -128,11 +127,24 @@ def load(path: str | None) -> Config:
         validate(cfg)
         return cfg
 
+    if not create_default:
+        raise FileNotFoundError(str(path))
+
     validate(cfg)
-    _write_default_config(p, cfg)
+    _write_default_config(path, cfg)
     return cfg
 
 
+def load(path: str | None) -> Config:
+    target = Path(path) if path else DEFAULT_CONFIG_PATH
+    return _load_from_path(target, create_default=True)
+
+
+def load_existing(path: str | None) -> Config:
+    target = Path(path) if path else DEFAULT_CONFIG_PATH
+    return _load_from_path(target, create_default=False)
+
+
 def save(path: str | Path | None, cfg: Config) -> Path:
     validate(cfg)
     target = Path(path) if path else DEFAULT_CONFIG_PATH

@ -1,202 +1,630 @@
|
||||||
from __future__ import annotations
|
from __future__ import annotations
|
||||||
|
|
||||||
import json
|
import json
|
||||||
from dataclasses import asdict, dataclass
|
import os
|
||||||
|
import shutil
|
||||||
|
import subprocess
|
||||||
|
from dataclasses import dataclass
|
||||||
from pathlib import Path
|
from pathlib import Path
|
||||||
|
|
||||||
from aiprocess import ensure_model
|
from aiprocess import _load_llama_bindings, probe_managed_model
|
||||||
from config import Config, load
|
from config import Config, load_existing
|
||||||
|
from constants import DEFAULT_CONFIG_PATH, MODEL_DIR
|
||||||
from desktop import get_desktop_adapter
|
from desktop import get_desktop_adapter
|
||||||
from recorder import resolve_input_device
|
from recorder import list_input_devices, resolve_input_device
|
||||||
|
|
||||||
|
|
||||||
|
STATUS_OK = "ok"
|
||||||
|
STATUS_WARN = "warn"
|
||||||
|
STATUS_FAIL = "fail"
|
||||||
|
_VALID_STATUSES = {STATUS_OK, STATUS_WARN, STATUS_FAIL}
|
||||||
|
SERVICE_NAME = "aman"
|
||||||
|
|
||||||
|
|
||||||
@dataclass
|
@dataclass
|
||||||
class DiagnosticCheck:
|
class DiagnosticCheck:
|
||||||
id: str
|
id: str
|
||||||
ok: bool
|
status: str
|
||||||
message: str
|
message: str
|
||||||
hint: str = ""
|
next_step: str = ""
|
||||||
|
|
||||||
|
def __post_init__(self) -> None:
|
||||||
|
if self.status not in _VALID_STATUSES:
|
||||||
|
raise ValueError(f"invalid diagnostic status: {self.status}")
|
||||||
|
|
||||||
|
@property
|
||||||
|
def ok(self) -> bool:
|
||||||
|
return self.status != STATUS_FAIL
|
||||||
|
|
||||||
|
@property
|
||||||
|
def hint(self) -> str:
|
||||||
|
return self.next_step
|
||||||
|
|
||||||
|
def to_payload(self) -> dict[str, str | bool]:
|
||||||
|
return {
|
||||||
|
"id": self.id,
|
||||||
|
"status": self.status,
|
||||||
|
"ok": self.ok,
|
||||||
|
"message": self.message,
|
||||||
|
"next_step": self.next_step,
|
||||||
|
"hint": self.next_step,
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
@dataclass
|
@dataclass
|
||||||
class DiagnosticReport:
|
class DiagnosticReport:
|
||||||
checks: list[DiagnosticCheck]
|
checks: list[DiagnosticCheck]
|
||||||
|
|
||||||
|
@property
|
||||||
|
def status(self) -> str:
|
||||||
|
if any(check.status == STATUS_FAIL for check in self.checks):
|
||||||
|
return STATUS_FAIL
|
||||||
|
if any(check.status == STATUS_WARN for check in self.checks):
|
||||||
|
return STATUS_WARN
|
||||||
|
return STATUS_OK
|
||||||
|
|
||||||
@property
|
@property
|
||||||
def ok(self) -> bool:
|
def ok(self) -> bool:
|
||||||
return all(check.ok for check in self.checks)
|
return self.status != STATUS_FAIL
|
||||||
|
|
||||||
def to_json(self) -> str:
|
def to_json(self) -> str:
|
||||||
payload = {"ok": self.ok, "checks": [asdict(check) for check in self.checks]}
|
payload = {
|
||||||
|
"status": self.status,
|
||||||
|
"ok": self.ok,
|
||||||
|
"checks": [check.to_payload() for check in self.checks],
|
||||||
|
}
|
||||||
return json.dumps(payload, ensure_ascii=False, indent=2)
|
return json.dumps(payload, ensure_ascii=False, indent=2)
|
||||||
|
|
||||||
|
|
||||||
def run_diagnostics(config_path: str | None) -> DiagnosticReport:
|
@dataclass
|
||||||
checks: list[DiagnosticCheck] = []
|
class _ConfigLoadResult:
|
||||||
cfg: Config | None = None
|
check: DiagnosticCheck
|
||||||
|
cfg: Config | None
|
||||||
|
|
||||||
try:
|
|
||||||
cfg = load(config_path or "")
|
|
||||||
checks.append(
|
|
||||||
DiagnosticCheck(
|
|
||||||
id="config.load",
|
|
||||||
ok=True,
|
|
||||||
message=f"loaded config from {_resolved_config_path(config_path)}",
|
|
||||||
)
|
|
||||||
)
|
|
||||||
except Exception as exc:
|
|
||||||
checks.append(
|
|
||||||
DiagnosticCheck(
|
|
||||||
id="config.load",
|
|
||||||
ok=False,
|
|
||||||
message=f"failed to load config: {exc}",
|
|
||||||
hint=(
|
|
||||||
"open Settings... from Aman tray to save a valid config, or run "
|
|
||||||
"`aman init --force` for automation"
|
|
||||||
),
|
|
||||||
)
|
|
||||||
)
|
|
||||||
|
|
||||||
checks.extend(_audio_check(cfg))
|
def doctor_command(config_path: str | Path | None = None) -> str:
|
||||||
checks.extend(_hotkey_check(cfg))
|
return f"aman doctor --config {_resolved_config_path(config_path)}"
|
||||||
checks.extend(_injection_backend_check(cfg))
|
|
||||||
checks.extend(_provider_check(cfg))
|
|
||||||
checks.extend(_model_check(cfg))
|
def self_check_command(config_path: str | Path | None = None) -> str:
|
||||||
|
return f"aman self-check --config {_resolved_config_path(config_path)}"
|
||||||
|
|
||||||
|
|
||||||
|
def run_command(config_path: str | Path | None = None) -> str:
|
||||||
|
return f"aman run --config {_resolved_config_path(config_path)}"
|
||||||
|
|
||||||
|
|
||||||
|
def verbose_run_command(config_path: str | Path | None = None) -> str:
|
||||||
|
return f"{run_command(config_path)} --verbose"
|
||||||
|
|
||||||
|
|
||||||
|
def journalctl_command() -> str:
|
||||||
|
return "journalctl --user -u aman -f"
|
||||||
|
|
||||||
|
|
||||||
|
def format_support_line(issue_id: str, message: str, *, next_step: str = "") -> str:
|
||||||
|
line = f"{issue_id}: {message}"
|
||||||
|
if next_step:
|
||||||
|
line = f"{line} | next_step: {next_step}"
|
||||||
|
return line
|
||||||
|
|
||||||
|
|
||||||
|
def format_diagnostic_line(check: DiagnosticCheck) -> str:
|
||||||
|
return f"[{check.status.upper()}] {format_support_line(check.id, check.message, next_step=check.next_step)}"
|
||||||
|
|
||||||
|
|
||||||
|
def run_doctor(config_path: str | None) -> DiagnosticReport:
|
||||||
|
resolved_path = _resolved_config_path(config_path)
|
||||||
|
config_result = _load_config_check(resolved_path)
|
||||||
|
session_check = _session_check()
|
||||||
|
runtime_audio_check, input_devices = _runtime_audio_check(resolved_path)
|
||||||
|
service_prereq = _service_prereq_check()
|
||||||
|
|
||||||
|
checks = [
|
||||||
|
config_result.check,
|
||||||
|
session_check,
|
||||||
|
runtime_audio_check,
|
||||||
|
_audio_input_check(config_result.cfg, resolved_path, input_devices),
|
||||||
|
_hotkey_check(config_result.cfg, resolved_path, session_check),
|
||||||
|
_injection_backend_check(config_result.cfg, resolved_path, session_check),
|
||||||
|
service_prereq,
|
||||||
|
]
|
||||||
return DiagnosticReport(checks=checks)
|
return DiagnosticReport(checks=checks)
|
||||||
|
|
||||||
|
|
||||||
def _audio_check(cfg: Config | None) -> list[DiagnosticCheck]:
|
def run_self_check(config_path: str | None) -> DiagnosticReport:
|
||||||
if cfg is None:
|
resolved_path = _resolved_config_path(config_path)
|
||||||
return [
|
doctor_report = run_doctor(config_path)
|
||||||
|
checks = list(doctor_report.checks)
|
||||||
|
by_id = {check.id: check for check in checks}
|
||||||
|
|
||||||
|
model_check = _managed_model_check(resolved_path)
|
||||||
|
cache_check = _cache_writable_check(resolved_path)
|
||||||
|
unit_check = _service_unit_check(by_id["service.prereq"])
|
||||||
|
state_check = _service_state_check(by_id["service.prereq"], unit_check)
|
||||||
|
startup_check = _startup_readiness_check(
|
||||||
|
config=_config_from_checks(checks),
|
||||||
|
config_path=resolved_path,
|
||||||
|
model_check=model_check,
|
||||||
|
cache_check=cache_check,
|
||||||
|
)
|
||||||
|
|
||||||
|
checks.extend([model_check, cache_check, unit_check, state_check, startup_check])
|
||||||
|
return DiagnosticReport(checks=checks)
|
||||||
|
|
||||||
|
|
||||||
|
def run_diagnostics(config_path: str | None) -> DiagnosticReport:
|
||||||
|
return run_doctor(config_path)
|
||||||
|
|
||||||
|
|
||||||
|
def _resolved_config_path(config_path: str | Path | None) -> Path:
|
||||||
|
if config_path:
|
||||||
|
return Path(config_path)
|
||||||
|
return DEFAULT_CONFIG_PATH
|
||||||
|
|
||||||
|
|
||||||
|
def _config_from_checks(checks: list[DiagnosticCheck]) -> Config | None:
|
||||||
|
for check in checks:
|
||||||
|
cfg = getattr(check, "_diagnostic_cfg", None)
|
||||||
|
if cfg is not None:
|
||||||
|
return cfg
|
||||||
|
return None
|
||||||
|
|
||||||
|
|
||||||
|
def _load_config_check(config_path: Path) -> _ConfigLoadResult:
|
||||||
|
if not config_path.exists():
|
||||||
|
return _ConfigLoadResult(
|
||||||
|
check=DiagnosticCheck(
|
||||||
|
id="config.load",
|
||||||
|
status=STATUS_WARN,
|
||||||
|
message=f"config file does not exist at {config_path}",
|
||||||
|
next_step=(
|
||||||
|
f"run `{run_command(config_path)}` once to open Settings, "
|
||||||
|
"or run `aman init --force` for automation"
|
||||||
|
),
|
||||||
|
),
|
||||||
|
cfg=None,
|
||||||
|
)
|
||||||
|
try:
|
||||||
|
cfg = load_existing(str(config_path))
|
||||||
|
except Exception as exc:
|
||||||
|
return _ConfigLoadResult(
|
||||||
|
check=DiagnosticCheck(
|
||||||
|
id="config.load",
|
||||||
|
status=STATUS_FAIL,
|
||||||
|
message=f"failed to load config from {config_path}: {exc}",
|
||||||
|
next_step=(
|
||||||
|
f"fix {config_path} from Settings or rerun `{doctor_command(config_path)}` "
|
||||||
|
"after correcting the config"
|
||||||
|
),
|
||||||
|
),
|
||||||
|
cfg=None,
|
||||||
|
)
|
||||||
|
|
||||||
|
check = DiagnosticCheck(
|
||||||
|
id="config.load",
|
||||||
|
status=STATUS_OK,
|
||||||
|
message=f"loaded config from {config_path}",
|
||||||
|
)
|
||||||
|
setattr(check, "_diagnostic_cfg", cfg)
|
||||||
|
return _ConfigLoadResult(check=check, cfg=cfg)
|
||||||
|
|
||||||
|
|
||||||
|
def _session_check() -> DiagnosticCheck:
|
||||||
|
session_type = os.getenv("XDG_SESSION_TYPE", "").strip().lower()
|
||||||
|
if session_type == "wayland" or os.getenv("WAYLAND_DISPLAY"):
|
||||||
|
return DiagnosticCheck(
|
||||||
|
id="session.x11",
|
||||||
|
status=STATUS_FAIL,
|
||||||
|
message="Wayland session detected; Aman supports X11 only",
|
||||||
|
next_step="log into an X11 session and rerun diagnostics",
|
||||||
|
)
|
||||||
|
display = os.getenv("DISPLAY", "").strip()
|
||||||
|
if not display:
|
||||||
|
return DiagnosticCheck(
|
||||||
|
id="session.x11",
|
||||||
|
status=STATUS_FAIL,
|
||||||
|
message="DISPLAY is not set; no X11 desktop session is available",
|
||||||
|
next_step="run diagnostics from the same X11 user session that will run Aman",
|
||||||
|
)
|
||||||
|
return DiagnosticCheck(
|
||||||
|
id="session.x11",
|
||||||
|
status=STATUS_OK,
|
||||||
|
message=f"X11 session detected on DISPLAY={display}",
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
def _runtime_audio_check(config_path: Path) -> tuple[DiagnosticCheck, list[dict]]:
|
||||||
|
try:
|
||||||
|
devices = list_input_devices()
|
||||||
|
except Exception as exc:
|
||||||
|
return (
|
||||||
DiagnosticCheck(
|
DiagnosticCheck(
|
||||||
id="audio.input",
|
id="runtime.audio",
|
||||||
ok=False,
|
status=STATUS_FAIL,
|
||||||
message="skipped because config failed to load",
|
message=f"audio runtime is unavailable: {exc}",
|
||||||
hint="fix config.load first",
|
next_step=(
|
||||||
)
|
f"install the PortAudio runtime dependencies, then rerun `{doctor_command(config_path)}`"
|
||||||
]
|
),
|
||||||
|
),
|
||||||
|
[],
|
||||||
|
)
|
||||||
|
if not devices:
|
||||||
|
return (
|
||||||
|
DiagnosticCheck(
|
||||||
|
id="runtime.audio",
|
||||||
|
status=STATUS_WARN,
|
||||||
|
message="audio runtime is available but no input devices were detected",
|
||||||
|
next_step="connect a microphone or fix the system input device, then rerun diagnostics",
|
||||||
|
),
|
||||||
|
devices,
|
||||||
|
)
|
||||||
|
return (
|
||||||
|
DiagnosticCheck(
|
||||||
|
id="runtime.audio",
|
||||||
|
status=STATUS_OK,
|
||||||
|
message=f"audio runtime is available with {len(devices)} input device(s)",
|
||||||
|
),
|
||||||
|
devices,
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
def _audio_input_check(
|
||||||
|
cfg: Config | None,
|
||||||
|
config_path: Path,
|
||||||
|
input_devices: list[dict],
|
||||||
|
) -> DiagnosticCheck:
|
||||||
|
if cfg is None:
|
||||||
|
return DiagnosticCheck(
|
||||||
|
id="audio.input",
|
||||||
|
status=STATUS_WARN,
|
||||||
|
message="skipped until config.load is ready",
|
||||||
|
next_step=f"fix config.load first, then rerun `{doctor_command(config_path)}`",
|
||||||
|
)
|
||||||
input_spec = cfg.recording.input
|
input_spec = cfg.recording.input
|
||||||
explicit = input_spec is not None and (not isinstance(input_spec, str) or bool(input_spec.strip()))
|
explicit = input_spec is not None and (
|
||||||
|
not isinstance(input_spec, str) or bool(input_spec.strip())
|
||||||
|
)
|
||||||
device = resolve_input_device(input_spec)
|
device = resolve_input_device(input_spec)
|
||||||
if device is None and explicit:
|
if device is None and explicit:
|
||||||
return [
|
return DiagnosticCheck(
|
||||||
DiagnosticCheck(
|
id="audio.input",
|
||||||
id="audio.input",
|
status=STATUS_FAIL,
|
||||||
ok=False,
|
message=f"recording input '{input_spec}' is not resolvable",
|
||||||
message=f"recording input '{input_spec}' is not resolvable",
|
next_step="choose a valid recording.input in Settings or set it to a visible input device",
|
||||||
hint="set recording.input to a valid device index or matching device name",
|
)
|
||||||
)
|
if device is None and not input_devices:
|
||||||
]
|
return DiagnosticCheck(
|
||||||
|
id="audio.input",
|
||||||
|
status=STATUS_WARN,
|
||||||
|
message="recording input is unset and there is no default input device yet",
|
||||||
|
next_step="connect a microphone or choose a recording.input in Settings",
|
||||||
|
)
|
||||||
if device is None:
|
if device is None:
|
||||||
return [
|
return DiagnosticCheck(
|
||||||
DiagnosticCheck(
|
id="audio.input",
|
||||||
id="audio.input",
|
status=STATUS_OK,
|
||||||
ok=True,
|
message="recording input is unset; Aman will use the default system input",
|
||||||
message="recording input is unset; default system input will be used",
|
)
|
||||||
)
|
return DiagnosticCheck(
|
||||||
]
|
id="audio.input",
|
||||||
return [DiagnosticCheck(id="audio.input", ok=True, message=f"resolved recording input to device {device}")]
|
status=STATUS_OK,
|
||||||
|
message=f"resolved recording input to device {device}",
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
-def _hotkey_check(cfg: Config | None) -> list[DiagnosticCheck]:
+def _hotkey_check(
+    cfg: Config | None,
+    config_path: Path,
+    session_check: DiagnosticCheck,
+) -> DiagnosticCheck:
     if cfg is None:
-        return [
-            DiagnosticCheck(
-                id="hotkey.parse",
-                ok=False,
-                message="skipped because config failed to load",
-                hint="fix config.load first",
-            )
-        ]
+        return DiagnosticCheck(
+            id="hotkey.parse",
+            status=STATUS_WARN,
+            message="skipped until config.load is ready",
+            next_step=f"fix config.load first, then rerun `{doctor_command(config_path)}`",
+        )
+    if session_check.status == STATUS_FAIL:
+        return DiagnosticCheck(
+            id="hotkey.parse",
+            status=STATUS_WARN,
+            message="skipped until session.x11 is ready",
+            next_step="fix session.x11 first, then rerun diagnostics",
+        )
     try:
         desktop = get_desktop_adapter()
         desktop.validate_hotkey(cfg.daemon.hotkey)
     except Exception as exc:
-        return [
-            DiagnosticCheck(
-                id="hotkey.parse",
-                ok=False,
-                message=f"hotkey '{cfg.daemon.hotkey}' is not available: {exc}",
-                hint="pick another daemon.hotkey such as Super+m",
-            )
-        ]
-    return [DiagnosticCheck(id="hotkey.parse", ok=True, message=f"hotkey '{cfg.daemon.hotkey}' is valid")]
+        return DiagnosticCheck(
+            id="hotkey.parse",
+            status=STATUS_FAIL,
+            message=f"hotkey '{cfg.daemon.hotkey}' is not available: {exc}",
+            next_step="choose a different daemon.hotkey in Settings, then rerun diagnostics",
+        )
+    return DiagnosticCheck(
+        id="hotkey.parse",
+        status=STATUS_OK,
+        message=f"hotkey '{cfg.daemon.hotkey}' is available",
+    )
-def _injection_backend_check(cfg: Config | None) -> list[DiagnosticCheck]:
+def _injection_backend_check(
+    cfg: Config | None,
+    config_path: Path,
+    session_check: DiagnosticCheck,
+) -> DiagnosticCheck:
     if cfg is None:
-        return [
-            DiagnosticCheck(
-                id="injection.backend",
-                ok=False,
-                message="skipped because config failed to load",
-                hint="fix config.load first",
-            )
-        ]
-    return [
-        DiagnosticCheck(
-            id="injection.backend",
-            ok=True,
-            message=f"injection backend '{cfg.injection.backend}' is configured",
-        )
-    ]
-
-
-def _provider_check(cfg: Config | None) -> list[DiagnosticCheck]:
-    if cfg is None:
-        return [
-            DiagnosticCheck(
-                id="provider.runtime",
-                ok=False,
-                message="skipped because config failed to load",
-                hint="fix config.load first",
-            )
-        ]
-    return [
-        DiagnosticCheck(
-            id="provider.runtime",
-            ok=True,
-            message=f"stt={cfg.stt.provider}, editor=local_llama_builtin",
-        )
-    ]
+        return DiagnosticCheck(
+            id="injection.backend",
+            status=STATUS_WARN,
+            message="skipped until config.load is ready",
+            next_step=f"fix config.load first, then rerun `{doctor_command(config_path)}`",
+        )
+    if session_check.status == STATUS_FAIL:
+        return DiagnosticCheck(
+            id="injection.backend",
+            status=STATUS_WARN,
+            message="skipped until session.x11 is ready",
+            next_step="fix session.x11 first, then rerun diagnostics",
+        )
+    if cfg.injection.backend == "clipboard":
+        return DiagnosticCheck(
+            id="injection.backend",
+            status=STATUS_OK,
+            message="clipboard injection is configured for X11",
+        )
+    return DiagnosticCheck(
+        id="injection.backend",
+        status=STATUS_OK,
+        message=f"X11 key injection backend '{cfg.injection.backend}' is configured",
+    )
-def _model_check(cfg: Config | None) -> list[DiagnosticCheck]:
-    if cfg is None:
-        return [
-            DiagnosticCheck(
-                id="model.cache",
-                ok=False,
-                message="skipped because config failed to load",
-                hint="fix config.load first",
-            )
-        ]
-    if cfg.models.allow_custom_models and cfg.models.whisper_model_path.strip():
-        path = Path(cfg.models.whisper_model_path)
+def _service_prereq_check() -> DiagnosticCheck:
+    if shutil.which("systemctl") is None:
+        return DiagnosticCheck(
+            id="service.prereq",
+            status=STATUS_FAIL,
+            message="systemctl is not available; supported daily use requires systemd --user",
+            next_step="install or use a systemd --user session for the supported Aman service mode",
+        )
+    result = _run_systemctl_user(["is-system-running"])
+    state = (result.stdout or "").strip()
+    stderr = (result.stderr or "").strip()
+    if result.returncode == 0 and state == "running":
+        return DiagnosticCheck(
+            id="service.prereq",
+            status=STATUS_OK,
+            message="systemd --user is available (state=running)",
+        )
+    if state == "degraded":
+        return DiagnosticCheck(
+            id="service.prereq",
+            status=STATUS_WARN,
+            message="systemd --user is available but degraded",
+            next_step="check your user services and rerun diagnostics before relying on service mode",
+        )
+    if stderr:
+        return DiagnosticCheck(
+            id="service.prereq",
+            status=STATUS_FAIL,
+            message=f"systemd --user is unavailable: {stderr}",
+            next_step="log into a systemd --user session, then rerun diagnostics",
+        )
+    return DiagnosticCheck(
+        id="service.prereq",
+        status=STATUS_WARN,
+        message=f"systemd --user reported state '{state or 'unknown'}'",
+        next_step="verify the user service manager is healthy before relying on service mode",
+    )
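`_service_prereq_check` maps the output of `systemctl --user is-system-running` onto the tri-state statuses. The mapping can be sketched as a pure function over the subprocess result — the helper name and signature below are illustrative assumptions, not the project's API:

```python
def classify_user_manager(returncode: int, stdout: str, stderr: str) -> str:
    """Classify `systemctl --user is-system-running` output, mirroring the
    branch order of the check above."""
    state = stdout.strip()
    if returncode == 0 and state == "running":
        return "ok"    # healthy user service manager
    if state == "degraded":
        return "warn"  # usable, but some user units have failed
    if stderr.strip():
        return "fail"  # no reachable systemd --user session at all
    return "warn"      # any other state ("starting", "unknown") needs a look


print(classify_user_manager(0, "running\n", ""))                     # ok
print(classify_user_manager(1, "degraded\n", ""))                    # warn
print(classify_user_manager(1, "", "Failed to connect to bus"))      # fail
```

Keeping the classification separate from the subprocess call makes the branch table easy to unit-test without a live systemd session, which is exactly what the `_run_systemctl_user` patching in the tests below relies on.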
+def _managed_model_check(config_path: Path) -> DiagnosticCheck:
+    result = probe_managed_model()
+    if result.status == "ready":
+        return DiagnosticCheck(
+            id="model.cache",
+            status=STATUS_OK,
+            message=result.message,
+        )
+    if result.status == "missing":
+        return DiagnosticCheck(
+            id="model.cache",
+            status=STATUS_WARN,
+            message=result.message,
+            next_step=(
+                "start Aman once on a networked connection so it can download the managed editor model, "
+                f"then rerun `{self_check_command(config_path)}`"
+            ),
+        )
+    return DiagnosticCheck(
+        id="model.cache",
+        status=STATUS_FAIL,
+        message=result.message,
+        next_step=(
+            "remove the corrupted managed model cache and rerun Aman on a networked connection, "
+            f"then rerun `{self_check_command(config_path)}`"
+        ),
+    )
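`probe_managed_model` is deliberately read-only: it reports `ready`, `missing`, or `invalid` without downloading anything, and the tests below assert that `urlopen` is never called. A self-contained sketch of that style of probe, assuming a single cached file and a pinned SHA-256 — the `probe` helper here is hypothetical, not the project's function:

```python
import hashlib
import tempfile
from pathlib import Path


def probe(path: Path, expected_sha256: str) -> str:
    """Non-mutating model probe: classify the cache without network access."""
    if not path.exists():
        return "missing"
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return "ready" if digest == expected_sha256 else "invalid"


with tempfile.TemporaryDirectory() as td:
    model = Path(td) / "model.gguf"
    print(probe(model, "f" * 64))                       # missing
    model.write_bytes(b"valid-model")
    good = hashlib.sha256(b"valid-model").hexdigest()
    print(probe(model, good))                           # ready
    print(probe(model, "f" * 64))                       # invalid
```

The three probe outcomes map naturally onto the check's warn/ok/fail statuses, which keeps `doctor` safe to run on any machine.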
+def _cache_writable_check(config_path: Path) -> DiagnosticCheck:
+    target = MODEL_DIR
+    probe_path = target
+    while not probe_path.exists() and probe_path != probe_path.parent:
+        probe_path = probe_path.parent
+    if os.access(probe_path, os.W_OK):
+        message = (
+            f"managed model cache directory is writable at {target}"
+            if target.exists()
+            else f"managed model cache can be created under {probe_path}"
+        )
+        return DiagnosticCheck(
+            id="cache.writable",
+            status=STATUS_OK,
+            message=message,
+        )
+    return DiagnosticCheck(
+        id="cache.writable",
+        status=STATUS_FAIL,
+        message=f"managed model cache is not writable under {probe_path}",
+        next_step=(
+            f"fix write permissions for {MODEL_DIR}, then rerun `{self_check_command(config_path)}`"
+        ),
+    )
+def _service_unit_check(service_prereq: DiagnosticCheck) -> DiagnosticCheck:
+    if service_prereq.status == STATUS_FAIL:
+        return DiagnosticCheck(
+            id="service.unit",
+            status=STATUS_WARN,
+            message="skipped until service.prereq is ready",
+            next_step="fix service.prereq first, then rerun self-check",
+        )
+    result = _run_systemctl_user(
+        ["show", SERVICE_NAME, "--property=FragmentPath", "--value"]
+    )
+    fragment_path = (result.stdout or "").strip()
+    if result.returncode == 0 and fragment_path:
+        return DiagnosticCheck(
+            id="service.unit",
+            status=STATUS_OK,
+            message=f"user service unit is installed at {fragment_path}",
+        )
+    stderr = (result.stderr or "").strip()
+    if stderr:
+        return DiagnosticCheck(
+            id="service.unit",
+            status=STATUS_FAIL,
+            message=f"user service unit is unavailable: {stderr}",
+            next_step="rerun the portable install or reinstall the package-provided user service",
+        )
+    return DiagnosticCheck(
+        id="service.unit",
+        status=STATUS_FAIL,
+        message="user service unit is not installed for aman",
+        next_step="rerun the portable install or reinstall the package-provided user service",
+    )
+def _service_state_check(
+    service_prereq: DiagnosticCheck,
+    service_unit: DiagnosticCheck,
+) -> DiagnosticCheck:
+    if service_prereq.status == STATUS_FAIL or service_unit.status == STATUS_FAIL:
+        return DiagnosticCheck(
+            id="service.state",
+            status=STATUS_WARN,
+            message="skipped until service.prereq and service.unit are ready",
+            next_step="fix the service prerequisites first, then rerun self-check",
+        )
+
+    enabled_result = _run_systemctl_user(["is-enabled", SERVICE_NAME])
+    active_result = _run_systemctl_user(["is-active", SERVICE_NAME])
+    enabled = (enabled_result.stdout or enabled_result.stderr or "").strip()
+    active = (active_result.stdout or active_result.stderr or "").strip()
+
+    if enabled == "enabled" and active == "active":
+        return DiagnosticCheck(
+            id="service.state",
+            status=STATUS_OK,
+            message="user service is enabled and active",
+        )
+    if active == "failed":
+        return DiagnosticCheck(
+            id="service.state",
+            status=STATUS_FAIL,
+            message="user service is installed but failed to start",
+            next_step=f"inspect `{journalctl_command()}` to see why aman.service is failing",
+        )
+    return DiagnosticCheck(
+        id="service.state",
+        status=STATUS_WARN,
+        message=f"user service state is enabled={enabled or 'unknown'} active={active or 'unknown'}",
+        next_step=f"run `systemctl --user enable --now {SERVICE_NAME}` and rerun self-check",
+    )
+def _startup_readiness_check(
+    config: Config | None,
+    config_path: Path,
+    model_check: DiagnosticCheck,
+    cache_check: DiagnosticCheck,
+) -> DiagnosticCheck:
+    if config is None:
+        return DiagnosticCheck(
+            id="startup.readiness",
+            status=STATUS_WARN,
+            message="skipped until config.load is ready",
+            next_step=f"fix config.load first, then rerun `{self_check_command(config_path)}`",
+        )
+
+    custom_path = config.models.whisper_model_path.strip()
+    if custom_path:
+        path = Path(custom_path)
         if not path.exists():
-            return [
-                DiagnosticCheck(
-                    id="model.cache",
-                    ok=False,
-                    message=f"custom whisper model path does not exist: {path}",
-                    hint="fix models.whisper_model_path or disable custom model paths",
-                )
-            ]
-    try:
-        model_path = ensure_model()
-        return [DiagnosticCheck(id="model.cache", ok=True, message=f"editor model is ready at {model_path}")]
-    except Exception as exc:
-        return [
-            DiagnosticCheck(
-                id="model.cache",
-                ok=False,
-                message=f"model is not ready: {exc}",
-                hint="check internet access and writable cache directory",
-            )
-        ]
+            return DiagnosticCheck(
+                id="startup.readiness",
+                status=STATUS_FAIL,
+                message=f"custom Whisper model path does not exist: {path}",
+                next_step="fix models.whisper_model_path or disable custom model paths in Settings",
+            )
+
+    try:
+        from faster_whisper import WhisperModel  # type: ignore[import-not-found]
+
+        _ = WhisperModel
+    except ModuleNotFoundError as exc:
+        return DiagnosticCheck(
+            id="startup.readiness",
+            status=STATUS_FAIL,
+            message=f"Whisper runtime is unavailable: {exc}",
+            next_step="install Aman's Python runtime dependencies, then rerun self-check",
+        )
+
+    try:
+        _load_llama_bindings()
+    except Exception as exc:
+        return DiagnosticCheck(
+            id="startup.readiness",
+            status=STATUS_FAIL,
+            message=f"editor runtime is unavailable: {exc}",
+            next_step="install llama-cpp-python and rerun self-check",
+        )
+
+    if cache_check.status == STATUS_FAIL:
+        return DiagnosticCheck(
+            id="startup.readiness",
+            status=STATUS_FAIL,
+            message="startup is blocked because the managed model cache is not writable",
+            next_step=cache_check.next_step,
+        )
+    if model_check.status == STATUS_FAIL:
+        return DiagnosticCheck(
+            id="startup.readiness",
+            status=STATUS_FAIL,
+            message="startup is blocked because the managed editor model cache is invalid",
+            next_step=model_check.next_step,
+        )
+    if model_check.status == STATUS_WARN:
+        return DiagnosticCheck(
+            id="startup.readiness",
+            status=STATUS_WARN,
+            message="startup prerequisites are present, but offline startup is not ready until the managed model is cached",
+            next_step=model_check.next_step,
+        )
+    return DiagnosticCheck(
+        id="startup.readiness",
+        status=STATUS_OK,
+        message="startup prerequisites are ready without requiring downloads",
+    )
-def _resolved_config_path(config_path: str | None) -> Path:
-    from constants import DEFAULT_CONFIG_PATH
-
-    return Path(config_path) if config_path else DEFAULT_CONFIG_PATH
+def _run_systemctl_user(args: list[str]) -> subprocess.CompletedProcess[str]:
+    return subprocess.run(
+        ["systemctl", "--user", *args],
+        text=True,
+        capture_output=True,
+        check=False,
+    )
@@ -24,6 +24,7 @@ from aiprocess import (
     _profile_generation_kwargs,
     _supports_response_format,
     ensure_model,
+    probe_managed_model,
 )
 from constants import MODEL_SHA256
@@ -325,6 +326,42 @@ class EnsureModelTests(unittest.TestCase):
         ):
             ensure_model()
 
+    def test_probe_managed_model_is_read_only_for_valid_cache(self):
+        payload = b"valid-model"
+        checksum = sha256(payload).hexdigest()
+        with tempfile.TemporaryDirectory() as td:
+            model_path = Path(td) / "model.gguf"
+            model_path.write_bytes(payload)
+            with patch.object(aiprocess, "MODEL_PATH", model_path), patch.object(
+                aiprocess, "MODEL_SHA256", checksum
+            ), patch("aiprocess.urllib.request.urlopen") as urlopen:
+                result = probe_managed_model()
+
+        self.assertEqual(result.status, "ready")
+        self.assertIn("ready", result.message)
+        urlopen.assert_not_called()
+
+    def test_probe_managed_model_reports_missing_cache(self):
+        with tempfile.TemporaryDirectory() as td:
+            model_path = Path(td) / "model.gguf"
+            with patch.object(aiprocess, "MODEL_PATH", model_path):
+                result = probe_managed_model()
+
+        self.assertEqual(result.status, "missing")
+        self.assertIn(str(model_path), result.message)
+
+    def test_probe_managed_model_reports_invalid_checksum(self):
+        with tempfile.TemporaryDirectory() as td:
+            model_path = Path(td) / "model.gguf"
+            model_path.write_bytes(b"bad-model")
+            with patch.object(aiprocess, "MODEL_PATH", model_path), patch.object(
+                aiprocess, "MODEL_SHA256", "f" * 64
+            ):
+                result = probe_managed_model()
+
+        self.assertEqual(result.status, "invalid")
+        self.assertIn("checksum mismatch", result.message)
+
+
 class ExternalApiProcessorTests(unittest.TestCase):
     def test_requires_api_key_env_var(self):
@@ -47,6 +47,18 @@ class FakeDesktop:
         self.quit_calls += 1
 
 
+class FailingInjectDesktop(FakeDesktop):
+    def inject_text(
+        self,
+        text: str,
+        backend: str,
+        *,
+        remove_transcription_from_clipboard: bool = False,
+    ) -> None:
+        _ = (text, backend, remove_transcription_from_clipboard)
+        raise RuntimeError("xtest unavailable")
+
+
 class FakeSegment:
     def __init__(self, text: str):
         self.text = text
 
@@ -517,6 +529,37 @@ class DaemonTests(unittest.TestCase):
         self.assertEqual(stream.stop_calls, 1)
         self.assertEqual(stream.close_calls, 1)
 
+    @patch("aman.start_audio_recording", side_effect=RuntimeError("device missing"))
+    def test_record_start_failure_logs_actionable_issue(self, _start_mock):
+        desktop = FakeDesktop()
+        daemon = self._build_daemon(desktop, FakeModel(), verbose=False)
+
+        with self.assertLogs(level="ERROR") as logs:
+            daemon.toggle()
+
+        rendered = "\n".join(logs.output)
+        self.assertIn("audio.input: record start failed: device missing", rendered)
+        self.assertIn("next_step: run `aman doctor --config", rendered)
+
+    @patch("aman.stop_audio_recording", return_value=FakeAudio(8))
+    @patch("aman.start_audio_recording", return_value=(object(), object()))
+    def test_output_failure_logs_actionable_issue(self, _start_mock, _stop_mock):
+        desktop = FailingInjectDesktop()
+        daemon = self._build_daemon(desktop, FakeModel(), verbose=False)
+        daemon._start_stop_worker = (
+            lambda stream, record, trigger, process_audio: daemon._stop_and_process(
+                stream, record, trigger, process_audio
+            )
+        )
+
+        with self.assertLogs(level="ERROR") as logs:
+            daemon.toggle()
+            daemon.toggle()
+
+        rendered = "\n".join(logs.output)
+        self.assertIn("injection.backend: output failed: xtest unavailable", rendered)
+        self.assertIn("next_step: run `aman doctor --config", rendered)
+
     @patch("aman.stop_audio_recording", return_value=FakeAudio(8))
     @patch("aman.start_audio_recording", return_value=(object(), object()))
     def test_ai_processor_receives_active_profile(self, _start_mock, _stop_mock):
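These daemon tests pin down a two-line failure format: a stable check ID with the failing operation and its cause, followed by a `next_step` line carrying the same recovery wording as doctor/self-check. Rendering it can be as simple as the following hypothetical helper, shown only to make the asserted format concrete:

```python
def render_failure(check_id: str, summary: str, cause: str, next_step: str) -> list:
    # Line 1: stable diagnostic ID + what failed + why.
    # Line 2: the recovery hint, prefixed so log scrapers can find it.
    return [f"{check_id}: {summary}: {cause}", f"next_step: {next_step}"]


lines = render_failure(
    "audio.input", "record start failed", "device missing",
    "run `aman doctor --config <path>` for recovery steps",
)
print(lines[0])  # audio.input: record start failed: device missing
print(lines[1])
```

Reusing one renderer for CLI output, service logs, and tray-triggered diagnostics is what keeps the wording identical across all three surfaces.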
@@ -52,10 +52,17 @@ class _FakeDesktop:
         return
 
 
+class _HotkeyFailDesktop(_FakeDesktop):
+    def start_hotkey_listener(self, hotkey, callback):
+        _ = (hotkey, callback)
+        raise RuntimeError("already in use")
+
+
 class _FakeDaemon:
-    def __init__(self, cfg, _desktop, *, verbose=False):
+    def __init__(self, cfg, _desktop, *, verbose=False, config_path=None):
         self.cfg = cfg
         self.verbose = verbose
+        self.config_path = config_path
         self._paused = False
 
     def get_state(self):
 
@@ -215,29 +222,58 @@ class AmanCliTests(unittest.TestCase):
 
     def test_doctor_command_json_output_and_exit_code(self):
         report = DiagnosticReport(
-            checks=[DiagnosticCheck(id="config.load", ok=True, message="ok", hint="")]
+            checks=[DiagnosticCheck(id="config.load", status="ok", message="ok", next_step="")]
         )
         args = aman._parse_cli_args(["doctor", "--json"])
         out = io.StringIO()
-        with patch("aman.run_diagnostics", return_value=report), patch("sys.stdout", out):
+        with patch("aman.run_doctor", return_value=report), patch("sys.stdout", out):
             exit_code = aman._doctor_command(args)
 
         self.assertEqual(exit_code, 0)
         payload = json.loads(out.getvalue())
         self.assertTrue(payload["ok"])
+        self.assertEqual(payload["status"], "ok")
         self.assertEqual(payload["checks"][0]["id"], "config.load")
 
     def test_doctor_command_failed_report_returns_exit_code_2(self):
         report = DiagnosticReport(
-            checks=[DiagnosticCheck(id="config.load", ok=False, message="broken", hint="fix")]
+            checks=[DiagnosticCheck(id="config.load", status="fail", message="broken", next_step="fix")]
         )
         args = aman._parse_cli_args(["doctor"])
         out = io.StringIO()
-        with patch("aman.run_diagnostics", return_value=report), patch("sys.stdout", out):
+        with patch("aman.run_doctor", return_value=report), patch("sys.stdout", out):
             exit_code = aman._doctor_command(args)
 
         self.assertEqual(exit_code, 2)
         self.assertIn("[FAIL] config.load", out.getvalue())
+        self.assertIn("overall: fail", out.getvalue())
+
+    def test_doctor_command_warning_report_returns_exit_code_0(self):
+        report = DiagnosticReport(
+            checks=[DiagnosticCheck(id="model.cache", status="warn", message="missing", next_step="run aman once")]
+        )
+        args = aman._parse_cli_args(["doctor"])
+        out = io.StringIO()
+        with patch("aman.run_doctor", return_value=report), patch("sys.stdout", out):
+            exit_code = aman._doctor_command(args)
+
+        self.assertEqual(exit_code, 0)
+        self.assertIn("[WARN] model.cache", out.getvalue())
+        self.assertIn("overall: warn", out.getvalue())
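Taken together, the three doctor tests above fix the exit-code contract: reports whose overall status is `ok` or `warn` exit 0, and only `fail` exits 2. As a one-line policy function (illustrative, not the CLI's actual code):

```python
def doctor_exit_code(overall_status: str) -> int:
    # Warnings are surfaced in the output but do not fail the gate;
    # only a hard failure returns the non-zero diagnostic exit code.
    return 2 if overall_status == "fail" else 0


print(doctor_exit_code("ok"), doctor_exit_code("warn"), doctor_exit_code("fail"))  # 0 0 2
```

This is what lets a `make runtime-check` gate pass on a machine that has not yet cached the managed model, while still failing hard on real breakage.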
+    def test_self_check_command_uses_self_check_runner(self):
+        report = DiagnosticReport(
+            checks=[DiagnosticCheck(id="startup.readiness", status="ok", message="ready", next_step="")]
+        )
+        args = aman._parse_cli_args(["self-check", "--json"])
+        out = io.StringIO()
+        with patch("aman.run_self_check", return_value=report) as runner, patch("sys.stdout", out):
+            exit_code = aman._self_check_command(args)
+
+        self.assertEqual(exit_code, 0)
+        runner.assert_called_once_with("")
+        payload = json.loads(out.getvalue())
+        self.assertEqual(payload["status"], "ok")
+
     def test_bench_command_json_output(self):
         args = aman._parse_cli_args(["bench", "--text", "hello", "--repeat", "2", "--warmup", "0", "--json"])
 
@@ -583,6 +619,42 @@ class AmanCliTests(unittest.TestCase):
         self.assertTrue(path.exists())
         self.assertEqual(desktop.settings_invocations, 1)
 
+    def test_run_command_hotkey_failure_logs_actionable_issue(self):
+        with tempfile.TemporaryDirectory() as td:
+            path = Path(td) / "config.json"
+            path.write_text(json.dumps({"config_version": 1}) + "\n", encoding="utf-8")
+            args = aman._parse_cli_args(["run", "--config", str(path)])
+            desktop = _HotkeyFailDesktop()
+            with patch("aman._lock_single_instance", return_value=object()), patch(
+                "aman.get_desktop_adapter", return_value=desktop
+            ), patch("aman.load", return_value=Config()), patch("aman.Daemon", _FakeDaemon), self.assertLogs(
+                level="ERROR"
+            ) as logs:
+                exit_code = aman._run_command(args)
+
+        self.assertEqual(exit_code, 1)
+        rendered = "\n".join(logs.output)
+        self.assertIn("hotkey.parse: hotkey setup failed: already in use", rendered)
+        self.assertIn("next_step: run `aman doctor --config", rendered)
+
+    def test_run_command_daemon_init_failure_logs_self_check_next_step(self):
+        with tempfile.TemporaryDirectory() as td:
+            path = Path(td) / "config.json"
+            path.write_text(json.dumps({"config_version": 1}) + "\n", encoding="utf-8")
+            args = aman._parse_cli_args(["run", "--config", str(path)])
+            desktop = _FakeDesktop()
+            with patch("aman._lock_single_instance", return_value=object()), patch(
+                "aman.get_desktop_adapter", return_value=desktop
+            ), patch("aman.load", return_value=Config()), patch(
+                "aman.Daemon", side_effect=RuntimeError("warmup boom")
+            ), self.assertLogs(level="ERROR") as logs:
+                exit_code = aman._run_command(args)
+
+        self.assertEqual(exit_code, 1)
+        rendered = "\n".join(logs.output)
+        self.assertIn("startup.readiness: startup failed: warmup boom", rendered)
+        self.assertIn("next_step: run `aman self-check --config", rendered)
+
+
 if __name__ == "__main__":
@@ -1,7 +1,9 @@
 import json
 import sys
+import tempfile
 import unittest
 from pathlib import Path
+from types import SimpleNamespace
 from unittest.mock import patch
 
 ROOT = Path(__file__).resolve().parents[1]
@@ -10,7 +12,13 @@ if str(SRC) not in sys.path:
     sys.path.insert(0, str(SRC))
 
 from config import Config
-from diagnostics import DiagnosticCheck, DiagnosticReport, run_diagnostics
+from diagnostics import (
+    DiagnosticCheck,
+    DiagnosticReport,
+    run_doctor,
+    run_diagnostics,
+    run_self_check,
+)
 
 
 class _FakeDesktop:
@@ -18,59 +26,207 @@ class _FakeDesktop:
         return
 
 
-class DiagnosticsTests(unittest.TestCase):
-    def test_run_diagnostics_all_checks_pass(self):
-        cfg = Config()
-        with patch("diagnostics.load", return_value=cfg), patch(
-            "diagnostics.resolve_input_device", return_value=1
-        ), patch("diagnostics.get_desktop_adapter", return_value=_FakeDesktop()), patch(
-            "diagnostics.ensure_model", return_value=Path("/tmp/model.gguf")
-        ):
-            report = run_diagnostics("/tmp/config.json")
+class _Result:
+    def __init__(self, *, returncode: int = 0, stdout: str = "", stderr: str = ""):
+        self.returncode = returncode
+        self.stdout = stdout
+        self.stderr = stderr
+
+
+def _systemctl_side_effect(*results: _Result):
+    iterator = iter(results)
+
+    def _runner(_args):
+        return next(iterator)
+
+    return _runner
+
+
+class DiagnosticsTests(unittest.TestCase):
+    def test_run_doctor_all_checks_pass(self):
+        cfg = Config()
+        with tempfile.TemporaryDirectory() as td:
+            config_path = Path(td) / "config.json"
+            config_path.write_text('{"config_version":1}\n', encoding="utf-8")
+            with patch.dict("os.environ", {"DISPLAY": ":0"}, clear=False), patch(
+                "diagnostics.load_existing", return_value=cfg
+            ), patch("diagnostics.list_input_devices", return_value=[{"index": 1, "name": "Mic"}]), patch(
+                "diagnostics.resolve_input_device", return_value=1
+            ), patch(
+                "diagnostics.get_desktop_adapter", return_value=_FakeDesktop()
+            ), patch(
+                "diagnostics._run_systemctl_user",
+                return_value=_Result(returncode=0, stdout="running\n"),
+            ), patch("diagnostics.probe_managed_model") as probe_model:
+                report = run_doctor(str(config_path))
 
+        self.assertEqual(report.status, "ok")
         self.assertTrue(report.ok)
-        ids = [check.id for check in report.checks]
         self.assertEqual(
-            ids,
+            [check.id for check in report.checks],
             [
                 "config.load",
+                "session.x11",
+                "runtime.audio",
                 "audio.input",
                 "hotkey.parse",
                 "injection.backend",
-                "provider.runtime",
-                "model.cache",
+                "service.prereq",
             ],
         )
-        self.assertTrue(all(check.ok for check in report.checks))
+        self.assertTrue(all(check.status == "ok" for check in report.checks))
+        probe_model.assert_not_called()
 
-    def test_run_diagnostics_marks_config_fail_and_skips_dependent_checks(self):
-        with patch("diagnostics.load", side_effect=ValueError("broken config")), patch(
-            "diagnostics.ensure_model", return_value=Path("/tmp/model.gguf")
-        ):
-            report = run_diagnostics("/tmp/config.json")
+    def test_run_doctor_missing_config_warns_without_writing(self):
+        with tempfile.TemporaryDirectory() as td:
+            config_path = Path(td) / "config.json"
+            with patch.dict("os.environ", {"DISPLAY": ":0"}, clear=False), patch(
+                "diagnostics.list_input_devices", return_value=[]
+            ), patch(
+                "diagnostics._run_systemctl_user",
+                return_value=_Result(returncode=0, stdout="running\n"),
+            ):
+                report = run_doctor(str(config_path))
 
-        self.assertFalse(report.ok)
+        self.assertEqual(report.status, "warn")
         results = {check.id: check for check in report.checks}
-        self.assertFalse(results["config.load"].ok)
-        self.assertFalse(results["audio.input"].ok)
-        self.assertFalse(results["hotkey.parse"].ok)
-        self.assertFalse(results["injection.backend"].ok)
-        self.assertFalse(results["provider.runtime"].ok)
-        self.assertFalse(results["model.cache"].ok)
+        self.assertEqual(results["config.load"].status, "warn")
+        self.assertEqual(results["runtime.audio"].status, "warn")
+        self.assertEqual(results["audio.input"].status, "warn")
+        self.assertIn("open Settings", results["config.load"].next_step)
+        self.assertFalse(config_path.exists())
 
-    def test_report_json_schema(self):
+    def test_run_self_check_adds_deeper_readiness_checks(self):
+        cfg = Config()
+        model_path = Path("/tmp/model.gguf")
+        with tempfile.TemporaryDirectory() as td:
+            config_path = Path(td) / "config.json"
+            config_path.write_text('{"config_version":1}\n', encoding="utf-8")
+            with patch.dict("os.environ", {"DISPLAY": ":0"}, clear=False), patch(
+                "diagnostics.load_existing", return_value=cfg
+            ), patch("diagnostics.list_input_devices", return_value=[{"index": 1, "name": "Mic"}]), patch(
+                "diagnostics.resolve_input_device", return_value=1
+            ), patch(
+                "diagnostics.get_desktop_adapter", return_value=_FakeDesktop()
+            ), patch(
+                "diagnostics._run_systemctl_user",
+                side_effect=_systemctl_side_effect(
+                    _Result(returncode=0, stdout="running\n"),
+                    _Result(returncode=0, stdout="/home/test/.config/systemd/user/aman.service\n"),
+                    _Result(returncode=0, stdout="enabled\n"),
+                    _Result(returncode=0, stdout="active\n"),
|
||||||
|
),
|
||||||
|
), patch(
|
||||||
|
"diagnostics.probe_managed_model",
|
||||||
|
return_value=SimpleNamespace(
|
||||||
|
status="ready",
|
||||||
|
path=model_path,
|
||||||
|
message=f"managed editor model is ready at {model_path}",
|
||||||
|
),
|
||||||
|
), patch(
|
||||||
|
"diagnostics.MODEL_DIR", model_path.parent
|
||||||
|
), patch(
|
||||||
|
"diagnostics.os.access", return_value=True
|
||||||
|
), patch(
|
||||||
|
"diagnostics._load_llama_bindings", return_value=(object(), object())
|
||||||
|
), patch.dict(
|
||||||
|
"sys.modules", {"faster_whisper": SimpleNamespace(WhisperModel=object())}
|
||||||
|
):
|
||||||
|
report = run_self_check(str(config_path))
|
||||||
|
|
||||||
|
self.assertEqual(report.status, "ok")
|
||||||
|
self.assertEqual(
|
||||||
|
[check.id for check in report.checks[-5:]],
|
||||||
|
[
|
||||||
|
"model.cache",
|
||||||
|
"cache.writable",
|
||||||
|
"service.unit",
|
||||||
|
"service.state",
|
||||||
|
"startup.readiness",
|
||||||
|
],
|
||||||
|
)
|
||||||
|
self.assertTrue(all(check.status == "ok" for check in report.checks))
|
||||||
|
|
||||||
|
def test_run_self_check_missing_model_warns_without_downloading(self):
|
||||||
|
cfg = Config()
|
||||||
|
model_path = Path("/tmp/model.gguf")
|
||||||
|
with tempfile.TemporaryDirectory() as td:
|
||||||
|
config_path = Path(td) / "config.json"
|
||||||
|
config_path.write_text('{"config_version":1}\n', encoding="utf-8")
|
||||||
|
with patch.dict("os.environ", {"DISPLAY": ":0"}, clear=False), patch(
|
||||||
|
"diagnostics.load_existing", return_value=cfg
|
||||||
|
), patch("diagnostics.list_input_devices", return_value=[{"index": 1, "name": "Mic"}]), patch(
|
||||||
|
"diagnostics.resolve_input_device", return_value=1
|
||||||
|
), patch(
|
||||||
|
"diagnostics.get_desktop_adapter", return_value=_FakeDesktop()
|
||||||
|
), patch(
|
||||||
|
"diagnostics._run_systemctl_user",
|
||||||
|
side_effect=_systemctl_side_effect(
|
||||||
|
_Result(returncode=0, stdout="running\n"),
|
||||||
|
_Result(returncode=0, stdout="/home/test/.config/systemd/user/aman.service\n"),
|
||||||
|
_Result(returncode=0, stdout="enabled\n"),
|
||||||
|
_Result(returncode=0, stdout="active\n"),
|
||||||
|
),
|
||||||
|
), patch(
|
||||||
|
"diagnostics.probe_managed_model",
|
||||||
|
return_value=SimpleNamespace(
|
||||||
|
status="missing",
|
||||||
|
path=model_path,
|
||||||
|
message=f"managed editor model is not cached at {model_path}",
|
||||||
|
),
|
||||||
|
) as probe_model, patch(
|
||||||
|
"diagnostics.MODEL_DIR", model_path.parent
|
||||||
|
), patch(
|
||||||
|
"diagnostics.os.access", return_value=True
|
||||||
|
), patch(
|
||||||
|
"diagnostics._load_llama_bindings", return_value=(object(), object())
|
||||||
|
), patch.dict(
|
||||||
|
"sys.modules", {"faster_whisper": SimpleNamespace(WhisperModel=object())}
|
||||||
|
):
|
||||||
|
report = run_self_check(str(config_path))
|
||||||
|
|
||||||
|
self.assertEqual(report.status, "warn")
|
||||||
|
results = {check.id: check for check in report.checks}
|
||||||
|
self.assertEqual(results["model.cache"].status, "warn")
|
||||||
|
self.assertEqual(results["startup.readiness"].status, "warn")
|
||||||
|
self.assertIn("networked connection", results["model.cache"].next_step)
|
||||||
|
probe_model.assert_called_once()
|
||||||
|
|
||||||
|
def test_run_diagnostics_alias_matches_doctor(self):
|
||||||
|
cfg = Config()
|
||||||
|
with tempfile.TemporaryDirectory() as td:
|
||||||
|
config_path = Path(td) / "config.json"
|
||||||
|
config_path.write_text('{"config_version":1}\n', encoding="utf-8")
|
||||||
|
with patch.dict("os.environ", {"DISPLAY": ":0"}, clear=False), patch(
|
||||||
|
"diagnostics.load_existing", return_value=cfg
|
||||||
|
), patch("diagnostics.list_input_devices", return_value=[{"index": 1, "name": "Mic"}]), patch(
|
||||||
|
"diagnostics.resolve_input_device", return_value=1
|
||||||
|
), patch(
|
||||||
|
"diagnostics.get_desktop_adapter", return_value=_FakeDesktop()
|
||||||
|
), patch(
|
||||||
|
"diagnostics._run_systemctl_user",
|
||||||
|
return_value=_Result(returncode=0, stdout="running\n"),
|
||||||
|
):
|
||||||
|
report = run_diagnostics(str(config_path))
|
||||||
|
|
||||||
|
self.assertEqual(report.status, "ok")
|
||||||
|
self.assertEqual(len(report.checks), 7)
|
||||||
|
|
||||||
|
def test_report_json_schema_includes_status_and_next_step(self):
|
||||||
report = DiagnosticReport(
|
report = DiagnosticReport(
|
||||||
checks=[
|
checks=[
|
||||||
DiagnosticCheck(id="config.load", ok=True, message="ok", hint=""),
|
DiagnosticCheck(id="config.load", status="warn", message="missing", next_step="open settings"),
|
||||||
DiagnosticCheck(id="model.cache", ok=False, message="nope", hint="fix"),
|
DiagnosticCheck(id="service.prereq", status="fail", message="broken", next_step="fix systemd"),
|
||||||
]
|
]
|
||||||
)
|
)
|
||||||
|
|
||||||
payload = json.loads(report.to_json())
|
payload = json.loads(report.to_json())
|
||||||
|
|
||||||
|
self.assertEqual(payload["status"], "fail")
|
||||||
self.assertFalse(payload["ok"])
|
self.assertFalse(payload["ok"])
|
||||||
self.assertEqual(payload["checks"][0]["id"], "config.load")
|
self.assertEqual(payload["checks"][0]["status"], "warn")
|
||||||
self.assertEqual(payload["checks"][1]["hint"], "fix")
|
self.assertEqual(payload["checks"][0]["next_step"], "open settings")
|
||||||
|
self.assertEqual(payload["checks"][1]["hint"], "fix systemd")
|
||||||
|
|
||||||
|
|
||||||
if __name__ == "__main__":
|
if __name__ == "__main__":
|
||||||
|
|
|
||||||
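The tests in this diff pin down the shape of the tri-state report without showing its definition. A minimal sketch of what `DiagnosticCheck` and `DiagnosticReport` could look like, inferred only from the assertions above — the real definitions live in `src/diagnostics.py`, and the worst-status aggregation rule and the `hint` alias for `next_step` in the JSON payload are assumptions read off the schema test:

```python
import json
from dataclasses import dataclass, field
from typing import List

# Severity order for aggregating check statuses; assumed, not from the source.
_SEVERITY = {"ok": 0, "warn": 1, "fail": 2}


@dataclass
class DiagnosticCheck:
    id: str
    status: str  # tri-state: "ok" | "warn" | "fail"
    message: str
    next_step: str = ""


@dataclass
class DiagnosticReport:
    checks: List[DiagnosticCheck] = field(default_factory=list)

    @property
    def status(self) -> str:
        # Worst status wins: any "fail" fails the report, else any "warn" warns it.
        return max((c.status for c in self.checks), key=_SEVERITY.get, default="ok")

    @property
    def ok(self) -> bool:
        # Legacy boolean kept alongside the tri-state status.
        return self.status == "ok"

    def to_json(self) -> str:
        return json.dumps(
            {
                "status": self.status,
                "ok": self.ok,
                "checks": [
                    {
                        "id": c.id,
                        "status": c.status,
                        "message": c.message,
                        "next_step": c.next_step,
                        # "hint" mirrors next_step for older consumers of the payload.
                        "hint": c.next_step,
                    }
                    for c in self.checks
                ],
            }
        )
```

Under this sketch, a report with one "warn" and one "fail" check serializes with top-level `status: "fail"` and `ok: false`, matching `test_report_json_schema_includes_status_and_next_step`.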