daemon split (4/5): extract *VMService service
Phase 4 of the daemon god-struct refactor. VM lifecycle, create-op
registry, handle cache, disk provisioning, stats polling, ports
query, and the per-VM lock set all move off *Daemon onto *VMService.
Daemon keeps thin forwarders only for FindVM / TouchVM (dispatch
surface) and is otherwise out of VM lifecycle. Lazy-init via
d.vmSvc() mirrors the earlier services so test literals like
`&Daemon{store: db, runner: r}` still get a functional service
without spelling one out.
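The lazy-init seam can be sketched as below. This is a hypothetical skeleton, not the daemon's real code: the actual `VMService` fields and constructor are not shown in this commit, and `ready` is an illustrative stand-in.

```go
package main

import "fmt"

// Hypothetical stand-in for the real VMService.
type VMService struct {
	ready bool
}

type Daemon struct {
	vm *VMService // nil in a bare test literal
}

// vmSvc builds the service on first use, so a minimal literal such as
// &Daemon{store: db, runner: r} in tests still gets a functional
// service without spelling one out.
func (d *Daemon) vmSvc() *VMService {
	if d.vm == nil {
		d.vm = &VMService{ready: true}
	}
	return d.vm
}

func main() {
	d := &Daemon{} // no explicit service wiring
	fmt.Println(d.vmSvc().ready)
}
```

The accessor doubles as the cache: subsequent calls return the same pointer, so callers never care whether the literal was fully wired.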
Three small cleanups along the way:
* preflight helpers (validateStartPrereqs / addBaseStartPrereqs
/ addBaseStartCommandPrereqs / validateWorkDiskResizePrereqs)
move with the VM methods that call them.
* cleanupRuntime / rebuildDNS move to *VMService, with
HostNetwork primitives (findFirecrackerPID, cleanupDMSnapshot,
killVMProcess, releaseTap, waitForExit, sendCtrlAltDel)
reached through s.net instead of the hostNet() facade.
* vsockAgentBinary becomes a package-level function so both
*Daemon (doctor) and *VMService (preflight) call one entry
point instead of each owning a forwarder method.
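The shape of the third cleanup, sketched with a hypothetical signature (the real `vsockAgentBinary` parameters are not shown in this commit): a package-level function is one shared entry point, so neither `*Daemon` nor `*VMService` needs a forwarder method.

```go
package main

import (
	"fmt"
	"path/filepath"
)

// vsockAgentBinary as a package-level function. The layoutRoot
// parameter and the returned path are illustrative assumptions;
// the point is only that both *Daemon (doctor) and *VMService
// (preflight) call this directly instead of owning a thin
// forwarder method each.
func vsockAgentBinary(layoutRoot string) string {
	return filepath.Join(layoutRoot, "bin", "vsock-agent")
}

func main() {
	fmt.Println(vsockAgentBinary("/var/lib/daemon"))
}
```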
WorkspaceService's peer deps switch from eager method values to
closures — vmSvc() constructs VMService with WorkspaceService as a
peer, so resolving d.vmSvc().FindVM at construction time recursed
through workspaceSvc() → vmSvc(). Closures defer the lookup to call
time.
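A minimal reproduction of that cycle, with illustrative stand-in types (`Workspace`, `VM`, `findVM` are not the daemon's real definitions): eager wiring evaluates the peer accessor during construction and recurses, while a closure defers the lookup until both caches are warm.

```go
package main

import "fmt"

type Workspace struct {
	findVM func(id string) string // peer seam into the VM service
}

type VM struct{}

func (v *VM) FindVM(id string) string { return "vm:" + id }

type Daemon struct {
	ws *Workspace
	vm *VM
}

func (d *Daemon) vmSvc() *VM {
	if d.vm == nil {
		// VMService takes WorkspaceService as a peer, so construction
		// touches workspaceSvc() before d.vm is cached.
		_ = d.workspaceSvc()
		d.vm = &VM{}
	}
	return d.vm
}

func (d *Daemon) workspaceSvc() *Workspace {
	if d.ws == nil {
		d.ws = &Workspace{
			// Eager wiring (findVM: d.vmSvc().FindVM) would evaluate
			// d.vmSvc() right here, which re-enters workspaceSvc()
			// before d.ws is assigned: unbounded recursion. The
			// closure defers the lookup to call time.
			findVM: func(id string) string { return d.vmSvc().FindVM(id) },
		}
	}
	return d.ws
}

func main() {
	d := &Daemon{}
	fmt.Println(d.workspaceSvc().findVM("w1")) // both caches are warm by now
}
```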
Pure code motion: build + unit tests green, lint clean. No RPC
surface or lock-ordering changes.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
parent c0d456e734
commit 466a7c30c4
23 changed files with 655 additions and 463 deletions
```diff
@@ -91,20 +91,36 @@ func (d *Daemon) workspaceSvc() *WorkspaceService {
 	if d.ws != nil {
 		return d.ws
 	}
+	// Peer seams capture d by closure instead of pointing to
+	// d.vmSvc() / d.imageSvc() directly. vmSvc() constructs VMService
+	// with WorkspaceService as a peer, so resolving the peer service
+	// eagerly here would recurse. Closures defer the lookup to call
+	// time, by which point the cycle is broken because d.vm / d.img
+	// are already populated.
 	d.ws = newWorkspaceService(workspaceServiceDeps{
-		runner:          d.runner,
-		logger:          d.logger,
-		config:          d.config,
-		layout:          d.layout,
-		store:           d.store,
-		vmResolver:      d.FindVM,
-		aliveChecker:    d.vmAlive,
-		waitGuestSSH:    d.waitForGuestSSH,
-		dialGuest:       d.dialGuest,
-		imageResolver:   d.FindImage,
-		imageWorkSeed:   d.imageSvc().refreshManagedWorkSeedFingerprint,
-		withVMLockByRef: d.withVMLockByRef,
-		beginOperation:  d.beginOperation,
+		runner: d.runner,
+		logger: d.logger,
+		config: d.config,
+		layout: d.layout,
+		store:  d.store,
+		vmResolver: func(ctx context.Context, idOrName string) (model.VMRecord, error) {
+			return d.vmSvc().FindVM(ctx, idOrName)
+		},
+		aliveChecker: func(vm model.VMRecord) bool {
+			return d.vmSvc().vmAlive(vm)
+		},
+		waitGuestSSH: d.waitForGuestSSH,
+		dialGuest:    d.dialGuest,
+		imageResolver: func(ctx context.Context, idOrName string) (model.Image, error) {
+			return d.FindImage(ctx, idOrName)
+		},
+		imageWorkSeed: func(ctx context.Context, image model.Image, fingerprint string) error {
+			return d.imageSvc().refreshManagedWorkSeedFingerprint(ctx, image, fingerprint)
+		},
+		withVMLockByRef: func(ctx context.Context, idOrName string, fn func(model.VMRecord) (model.VMRecord, error)) (model.VMRecord, error) {
+			return d.vmSvc().withVMLockByRef(ctx, idOrName, fn)
+		},
+		beginOperation: d.beginOperation,
 	})
 	return d.ws
 }
```