daemon: surface previously-swallowed errors at warn

Three recovery-path errors were silently dropped:

- vm_lifecycle.go startVMLocked persisted the VMStateError record
  with `_ = s.store.UpsertVM(...)`. If the persist failed, the user
  still saw the original start error, but operators had no way to
  discover that the store had also drifted out of sync.
- vm_lifecycle.go deleteVMLocked killed the firecracker process
  with `_ = s.net.killVMProcess(...)`. cleanupRuntime tears it
  down regardless, so the explicit kill is best-effort, but an
  EPERM (permission denied) failure was still worth logging.
- capabilities.go cleanupPreparedCapabilities collected per-cap
  errors with errors.Join. Callers got the aggregated value but
  could not tell which capability failed when more than one did.

All three now log at Warn before the original behaviour continues.
The aggregate return value, control flow, and user-visible error
strings are unchanged: this is purely a "less silence in the
journal" pass.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Thales Maciel 2026-04-26 22:30:51 -03:00
parent 71a332a6a1
commit fa4292756d
No known key found for this signature in database
GPG key ID: 33112E6833C34679
2 changed files with 28 additions and 3 deletions


@@ -144,7 +144,16 @@ func (d *Daemon) cleanupPreparedCapabilities(ctx context.Context, vm *model.VMRe
 		if !ok {
 			continue
 		}
-		err = joinErr(err, hook.Cleanup(ctx, *vm))
+		cleanupErr := hook.Cleanup(ctx, *vm)
+		if cleanupErr != nil && d.logger != nil {
+			// Log per-capability cleanup failures. The aggregate
+			// errors.Join return value is still the contract for
+			// callers, but a multi-failure cleanup hides which
+			// capability misbehaved unless we surface each one
+			// individually here.
+			d.logger.Warn("capability cleanup failed", append(vmLogAttrs(*vm), "capability", capabilities[index].Name(), "error", cleanupErr.Error())...)
+		}
+		err = joinErr(err, cleanupErr)
 	}
 	return err
 }