daemon: extract StatsService sibling; shrink VMService's surface

This is commit 3 of the god-service decomposition. VMService still
owned 45+ methods after the startVMLocked extraction and the RPC
table landed in commits 1 and 2. Stats / ports / health / vsock-ping
sit in a corner of that surface that shares no state with lifecycle
orchestration: nothing about "what's this VM's CPU doing" belongs in
the same service as Create/Start/Stop/Delete/Set.

New StatsService owns:

  - GetVMStats / getVMStatsLocked / collectStats (stats collection)
  - HealthVM / PingVM (vsock-agent health probe)
  - PortsVM + buildVMPorts + probeWebListener + probeHTTPScheme +
    dedupeVMPorts (listening-port enumeration)
  - pollStats (background ticker refresh)
  - stopStaleVMs (auto-stop sweep past config.AutoStopStaleAfter;
    see the sketch after this list)
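
The sweep composes nothing but injected pieces. A minimal sketch,
assuming a deps field on StatsService, a store.ListVMs accessor, and
ID / LastActiveAt fields on model.VMRecord (all assumptions; only the
closure names come from the wiring diff below):

    func (s *StatsService) stopStaleVMs(ctx context.Context) error {
        cutoff := time.Now().Add(-s.deps.config.AutoStopStaleAfter)
        vms, err := s.deps.store.ListVMs(ctx) // assumed accessor
        if err != nil {
            return err
        }
        for _, vm := range vms {
            // Skip VMs that aren't running or were active after the cutoff.
            if !s.deps.vmAlive(vm) || vm.LastActiveAt.After(cutoff) {
                continue
            }
            // Serialize against the lifecycle verbs via the injected lock
            // helper, then tear down runtime state, preserving disks.
            stopErr := s.deps.withVMLockByIDErr(ctx, vm.ID, func(vm model.VMRecord) error {
                return s.deps.cleanupRuntime(ctx, vm, true)
            })
            if stopErr != nil {
                return stopErr
            }
        }
        return nil
    }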

The VMService touch-points stats genuinely needs (vmAlive,
vmHandles, the two per-VM lock helpers, plus cleanupRuntime for the
stale-sweep tear-down) come in as five function-typed closures, not
a *VMService pointer. StatsService has no back-reference to its
sibling. This mirrors the dependency-struct pattern WorkspaceService
already uses.
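
Inferred from the wiring hunk below, statsServiceDeps plausibly looks
like this (the closure signatures are read straight off the diff; the
config, store, and beginOperation types are assumptions):

    type statsServiceDeps struct {
        runner system.CommandRunner
        logger *slog.Logger
        config *Config // type assumed
        store  *Store  // type assumed
        net    *HostNetwork

        beginOperation func(name string) func() // signature assumed

        // Function-typed seams into VMService; no *VMService pointer.
        vmAlive           func(model.VMRecord) bool
        vmHandles         func(string) model.VMHandles
        withVMLockByRef   func(context.Context, string, func(model.VMRecord) (model.VMRecord, error)) (model.VMRecord, error)
        withVMLockByIDErr func(context.Context, string, func(model.VMRecord) error) error
        cleanupRuntime    func(context.Context, model.VMRecord, bool) error
    }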

Wiring: d.stats is populated in wireServices AFTER d.vm (closures
must see a non-nil d.vm at call time). Dispatch table's four
entries (vm.stats / vm.health / vm.ping / vm.ports) now resolve
through d.stats. Background loop's pollStats / stopStaleVMs
tickers do the same. Dispatch surface from the RPC client's
perspective is byte-identical.
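
The table change itself is mechanical. A sketch of the four entries,
assuming a handler type named rpcHandler and a map-literal table
(neither is shown in this commit):

    // Sketch: rpcHandler and the table's shape are assumptions; the
    // method names and the d.vm -> d.stats move are from this commit.
    entries := map[string]rpcHandler{
        "vm.stats":  d.stats.GetVMStats, // previously d.vm.GetVMStats
        "vm.health": d.stats.HealthVM,
        "vm.ping":   d.stats.PingVM,
        "vm.ports":  d.stats.PortsVM,
    }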

After this commit:

  - vm_stats.go and ports.go are deleted; their content (plus the
    stats-specific fields) lives in stats_service.go.
  - VMService loses 12 methods. It's still the biggest service
    (~30 methods, all lifecycle-supporting: handle cache, disk
    provisioning, preflight, create-ops registry, lock helpers,
    the lifecycle verbs themselves) but it's finally one coherent
    concern instead of five.

Tests:
  - TestWireServicesInstantiatesStatsService — pins that the
    wiring order puts d.stats non-nil + its five closures all
    populated. Prevents a silent background-loop regression. A
    plausible shape is sketched after this list.
  - All existing tests that called d.vm.HealthVM / d.vm.PingVM /
    d.vm.PortsVM / d.vm.collectStats were re-pointed at d.stats.
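
A plausible shape for the wiring test, assuming a newTestDaemon
helper and a deps field on StatsService (both are assumptions):

    func TestWireServicesInstantiatesStatsService(t *testing.T) {
        d := newTestDaemon(t) // assumed test helper
        wireServices(d)
        if d.stats == nil {
            t.Fatal("wireServices left d.stats nil")
        }
        // The five closures the background loop and handlers rely on.
        deps := d.stats.deps // field name assumed
        if deps.vmAlive == nil || deps.vmHandles == nil ||
            deps.withVMLockByRef == nil || deps.withVMLockByIDErr == nil ||
            deps.cleanupRuntime == nil {
            t.Fatal("stats closures not fully populated")
        }
    }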

Smoke: all 21 scenarios green, including vm ports (exercises the
new PortsVM entry end-to-end) and the long-running workspace
scenarios (exercise the background stats poller implicitly).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

7 changed files with 480 additions and 337 deletions

@@ -34,10 +34,11 @@ type Daemon struct {
     runner  system.CommandRunner
     logger  *slog.Logger
-    net     *HostNetwork
-    img     *ImageService
-    ws      *WorkspaceService
-    vm      *VMService
+    net     *HostNetwork
+    img     *ImageService
+    ws      *WorkspaceService
+    vm      *VMService
+    stats   *StatsService
     closing chan struct{}
     once    sync.Once
@@ -276,11 +277,11 @@ func (d *Daemon) backgroundLoop() {
         case <-d.closing:
             return
         case <-statsTicker.C:
-            if err := d.vm.pollStats(context.Background()); err != nil && d.logger != nil {
+            if err := d.stats.pollStats(context.Background()); err != nil && d.logger != nil {
                 d.logger.Error("background stats poll failed", "error", err.Error())
             }
         case <-staleTicker.C:
-            if err := d.vm.stopStaleVMs(context.Background()); err != nil && d.logger != nil {
+            if err := d.stats.stopStaleVMs(context.Background()); err != nil && d.logger != nil {
                 d.logger.Error("background stale sweep failed", "error", err.Error())
             }
             d.vm.pruneVMCreateOperations(time.Now().Add(-10 * time.Minute))
@@ -429,6 +430,31 @@ func wireServices(d *Daemon) {
             vsockHostDevice: defaultVsockHostDevice,
         })
     }
+    if d.stats == nil {
+        // Closures capture d rather than d.vm directly, so they re-read
+        // d.vm at call time. Wire order (d.vm constructed above) makes
+        // the closures safe, but this pattern also protects against a
+        // future test that swaps d.vm after initial wire.
+        d.stats = newStatsService(statsServiceDeps{
+            runner:         d.runner,
+            logger:         d.logger,
+            config:         d.config,
+            store:          d.store,
+            net:            d.net,
+            beginOperation: d.beginOperation,
+            vmAlive:        func(vm model.VMRecord) bool { return d.vm.vmAlive(vm) },
+            vmHandles:      func(id string) model.VMHandles { return d.vm.vmHandles(id) },
+            withVMLockByRef: func(ctx context.Context, idOrName string, fn func(model.VMRecord) (model.VMRecord, error)) (model.VMRecord, error) {
+                return d.vm.withVMLockByRef(ctx, idOrName, fn)
+            },
+            withVMLockByIDErr: func(ctx context.Context, id string, fn func(model.VMRecord) error) error {
+                return d.vm.withVMLockByIDErr(ctx, id, fn)
+            },
+            cleanupRuntime: func(ctx context.Context, vm model.VMRecord, preserve bool) error {
+                return d.vm.cleanupRuntime(ctx, vm, preserve)
+            },
+        })
+    }
     if len(d.vmCaps) == 0 {
         d.vmCaps = d.defaultCapabilities()
     }