Commit 3 of the god-service decomposition. VMService still
owned 45+ methods after the startVMLocked extraction and RPC table
landed in commits 1 and 2. Stats / ports / health / vsock-ping sit
in a corner of that surface that doesn't share any state with
lifecycle orchestration — nothing about "what's this VM's CPU
doing" belongs in the same service as Create/Start/Stop/Delete/Set.
New StatsService owns:
- GetVMStats / getVMStatsLocked / collectStats (stats collection)
- HealthVM / PingVM (vsock-agent health probe)
- PortsVM + buildVMPorts + probeWebListener + probeHTTPScheme +
dedupeVMPorts (listening-port enumeration)
- pollStats (background ticker refresh)
- stopStaleVMs (auto-stop sweep past config.AutoStopStaleAfter)
The five VMService touch-points stats genuinely needs — vmAlive,
vmHandles, the two per-VM lock helpers, and cleanupRuntime for the
stale-sweep tear-down — come in as function-typed closures, not a
*VMService pointer. StatsService has no back-reference to its
sibling. Mirrors the dependency-struct pattern WorkspaceService
already uses.
Wiring: d.stats is populated in wireServices AFTER d.vm (closures
must see a non-nil d.vm at call time). Dispatch table's four
entries (vm.stats / vm.health / vm.ping / vm.ports) now resolve
through d.stats. Background loop's pollStats / stopStaleVMs
tickers do the same. Dispatch surface from the RPC client's
perspective is byte-identical.
After this commit:
- vm_stats.go and ports.go are deleted; their content (plus the
stats-specific fields) lives in stats_service.go.
- VMService loses 12 methods. It's still the biggest service
(~30 methods, all lifecycle-supporting: handle cache, disk
provisioning, preflight, create-ops registry, lock helpers,
the lifecycle verbs themselves) but it's finally one coherent
concern instead of five.
Tests:
- TestWireServicesInstantiatesStatsService — pins that the
  wiring order leaves d.stats non-nil with all five of its
  closures populated. Prevents a silent background-loop regression.
- All existing tests that called d.vm.HealthVM / d.vm.PingVM /
d.vm.PortsVM / d.vm.collectStats were re-pointed at d.stats.
Smoke: all 21 scenarios green, including vm ports (exercises the
new PortsVM entry end-to-end) and the long-running workspace
scenarios (exercise the background stats poller implicitly).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
package daemon

import (
	"testing"

	"banger/internal/model"
	"banger/internal/paths"
)

// TestWireServicesInstantiatesStatsService pins that wireServices
// leaves d.stats non-nil after construction. A wiring-order bug that
// left stats unset would silently break background stats polling and
// the vm.stats / vm.health / vm.ping / vm.ports RPC methods — none
// of those would nil-deref at cold boot because the daemon might
// not get a call for minutes, but the pollStats ticker would
// immediately panic on its first fire.
func TestWireServicesInstantiatesStatsService(t *testing.T) {
	d := &Daemon{
		runner: &permissiveRunner{},
		config: model.DaemonConfig{BridgeIP: model.DefaultBridgeIP},
		layout: paths.Layout{
			StateDir:   t.TempDir(),
			ConfigDir:  t.TempDir(),
			RuntimeDir: t.TempDir(),
			VMsDir:     t.TempDir(),
		},
	}
	wireServices(d)

	if d.stats == nil {
		t.Fatal("d.stats is nil after wireServices")
	}
	// Spot-check the five closures that back every stats method —
	// a nil closure would be a less-obvious wiring regression than
	// a nil service.
	if d.stats.vmAlive == nil {
		t.Fatal("d.stats.vmAlive closure is nil")
	}
	if d.stats.vmHandles == nil {
		t.Fatal("d.stats.vmHandles closure is nil")
	}
	if d.stats.cleanupRuntime == nil {
		t.Fatal("d.stats.cleanupRuntime closure is nil")
	}
	if d.stats.withVMLockByRef == nil {
		t.Fatal("d.stats.withVMLockByRef closure is nil")
	}
	if d.stats.withVMLockByIDErr == nil {
		t.Fatal("d.stats.withVMLockByIDErr closure is nil")
	}
}