Cut VM create wall time without changing VM semantics.
Generate a work-seed ext4 sidecar during image builds and rootfs rebuilds, then clone and resize that seed for each new VM instead of rebuilding /root from scratch. Plumb the new seed artifact through config, runtime metadata, store state, runtime-bundle defaults, doctor checks, and default-image reconciliation so older images still fall back cleanly.
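For illustration, a minimal sketch of the clone-and-grow step, assuming a prebuilt seed image; the paths, target size, and resize strategy here are hypothetical, not the daemon's actual implementation.

```go
// Package workdisk sketches cloning the work-seed ext4 sidecar for a new VM.
package workdisk

import (
	"fmt"
	"io"
	"os"
	"os/exec"
)

// cloneSeed copies the seed image to a per-VM path and grows the backing file
// to targetBytes; resize2fs then expands the filesystem to fill the new size.
func cloneSeed(seedPath, vmPath string, targetBytes int64) error {
	src, err := os.Open(seedPath)
	if err != nil {
		return err
	}
	defer src.Close()

	dst, err := os.Create(vmPath)
	if err != nil {
		return err
	}
	defer dst.Close()

	if _, err := io.Copy(dst, src); err != nil {
		return err
	}
	// Grow the image file; the tail stays sparse until the guest writes to it.
	if err := os.Truncate(vmPath, targetBytes); err != nil {
		return err
	}
	// Expand the ext4 filesystem to the new file size (may require e2fsck -f first).
	if out, err := exec.Command("resize2fs", vmPath).CombinedOutput(); err != nil {
		return fmt.Errorf("resize2fs: %v: %s", err, out)
	}
	return nil
}
```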
Add a daemon TAP pool to keep idle bridge-attached devices warm, expose stage timing in lifecycle logs, add a create/SSH benchmark script plus Make target, and teach verify.sh that tap-pool-* devices are reusable capacity rather than cleanup leaks.
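A rough sketch of what a warm TAP pool can look like, assuming devices are pre-created and bridge-attached elsewhere; the names and sizing are hypothetical, not bangerd's actual API.

```go
// Package tappool sketches handing out pre-warmed, bridge-attached TAP devices.
package tappool

type TapPool struct {
	free chan string // names of idle TAP devices, e.g. "tap-pool-3"
}

func New(devices []string) *TapPool {
	p := &TapPool{free: make(chan string, len(devices))}
	for _, d := range devices {
		p.free <- d
	}
	return p
}

// Acquire hands out a warm device if one is idle; false tells the caller to
// fall back to creating a TAP device on demand.
func (p *TapPool) Acquire() (string, bool) {
	select {
	case d := <-p.free:
		return d, true
	default:
		return "", false
	}
}

// Release returns a device to the pool when its VM stops; if the pool is full
// the caller should delete the device instead.
func (p *TapPool) Release(d string) bool {
	select {
	case p.free <- d:
		return true
	default:
		return false
	}
}
```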
Validated with go test ./..., make build, ./verify.sh, and make bench-create ARGS="--runs 2".
Remind users when a VM is still running after banger vm ssh exits instead of silently dropping them back to the host shell.

Attach a Firecracker vsock device to each VM, persist the host vsock path/CID, add a new guest-side banger-vsock-pingd responder to the runtime bundle and both image-build paths, and expose a vm.ping RPC that the CLI and TUI call after SSH returns. Doctor and start/build preflight now validate the helper plus /dev/vhost-vsock so the feature fails early and clearly.

Validated with go mod tidy, bash -n customize.sh, git diff --check, make build, and GOCACHE=/tmp/banger-gocache go test ./... outside the sandbox because the daemon tests need real Unix/UDP sockets. Rebuild the image/rootfs used for new VMs so the guest ping service is present.
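For context, a host-side sketch of the post-SSH liveness ping, assuming Firecracker's hybrid vsock model where the device is exposed as a Unix socket on the host and host-initiated connections send a "CONNECT <port>" line; the guest port and the ping/pong payload are illustrative, not the actual vm.ping wire format.

```go
// Package vsockping sketches pinging the guest responder over the host vsock socket.
package vsockping

import (
	"bufio"
	"fmt"
	"net"
	"strings"
	"time"
)

func pingGuest(vsockUDSPath string, guestPort int) error {
	conn, err := net.DialTimeout("unix", vsockUDSPath, 2*time.Second)
	if err != nil {
		return err
	}
	defer conn.Close()
	conn.SetDeadline(time.Now().Add(2 * time.Second))

	// Firecracker host-side vsock handshake: ask it to connect to the guest port.
	if _, err := fmt.Fprintf(conn, "CONNECT %d\n", guestPort); err != nil {
		return err
	}
	r := bufio.NewReader(conn)
	line, err := r.ReadString('\n')
	if err != nil {
		return err
	}
	if !strings.HasPrefix(line, "OK") {
		return fmt.Errorf("vsock connect refused: %q", line)
	}

	// Exchange a one-line ping with the guest responder.
	if _, err := fmt.Fprintln(conn, "ping"); err != nil {
		return err
	}
	reply, err := r.ReadString('\n')
	if err != nil {
		return err
	}
	if strings.TrimSpace(reply) != "pong" {
		return fmt.Errorf("unexpected reply: %q", reply)
	}
	return nil
}
```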
Make host-integrated VM features fit a standard Go extension path instead of adding more one-off branches through vm.go. This is the enabling refactor for future work like shared mounts, not the /work feature itself.
Add a daemon capability pipeline plus a structured guest-config builder, then move the existing /root work-disk mount, built-in DNS, and NAT wiring onto those hooks. Generalize Firecracker drive config at the same time so later storage features can extend machine setup without another hardcoded path.
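To make the shape of the hooks concrete, a sketch of what a capability might look like, assuming a guest-config builder that capabilities mutate in sequence; the type and field names are illustrative, not the daemon's actual interfaces.

```go
// Package capability sketches the pipeline that host-integrated features plug into.
package capability

import "context"

// GuestConfig accumulates everything a capability can contribute to a VM:
// extra drives, kernel args, and files to drop into the guest.
type GuestConfig struct {
	Drives     []Drive
	KernelArgs []string
	Files      map[string][]byte
}

type Drive struct {
	HostPath   string
	GuestMount string
	ReadOnly   bool
}

// Capability is one host-integrated feature (work disk, DNS, NAT, ...). The
// daemon runs Configure for each enabled capability before machine setup and
// Teardown in reverse order when the VM stops.
type Capability interface {
	Name() string
	Configure(ctx context.Context, vmName string, cfg *GuestConfig) error
	Teardown(ctx context.Context, vmName string) error
}

// applyCapabilities builds the final guest config from a pipeline of hooks.
func applyCapabilities(ctx context.Context, vmName string, caps []Capability) (*GuestConfig, error) {
	cfg := &GuestConfig{Files: map[string][]byte{}}
	for _, c := range caps {
		if err := c.Configure(ctx, vmName, cfg); err != nil {
			return nil, err
		}
	}
	return cfg, nil
}
```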
Add banger doctor on top of the shared readiness checks, update the docs to describe the new architecture, and cover the new seams with guest-config, capability, report, CLI, and full go test verification. Also verify make build and a real ./banger doctor run on the host.
New VMs should come up with tmux session persistence ready instead of requiring per-VM plugin setup, and rebuilt images should stop carrying stale Docker installer scraps.
Configure both image build paths to install TPM, tmux-resurrect, and tmux-continuum for root, manage a marked /root/.tmux.conf block with autosave enabled and restore left as a manual step, and remove legacy get-docker helper files during provisioning.
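A small sketch of how a marker-delimited block can be managed idempotently across provisioning runs; the marker strings and the exact settings written are assumptions, not the ones the build scripts use.

```go
// Package provision sketches upserting a managed block in /root/.tmux.conf.
package provision

import "strings"

const (
	beginMark = "# BEGIN banger-managed tmux settings"
	endMark   = "# END banger-managed tmux settings"
)

// upsertManagedBlock replaces an existing managed block or appends a new one,
// leaving the user's own tmux.conf lines untouched.
func upsertManagedBlock(current, body string) string {
	block := beginMark + "\n" + body + "\n" + endMark
	start := strings.Index(current, beginMark)
	end := strings.Index(current, endMark)
	if start >= 0 && end > start {
		return current[:start] + block + current[end+len(endMark):]
	}
	if current != "" && !strings.HasSuffix(current, "\n") {
		current += "\n"
	}
	return current + block + "\n"
}
```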
Update the README and repo guidance to document the rebuilt-image behavior. Verified with bash -n customize.sh, GOCACHE=/tmp/banger-gocache go test ./internal/daemon -run TestBuildProvisionScriptInstallsDefaultTools, and GOCACHE=/tmp/banger-gocache make build.
Keep the user-facing docs aligned with the current Go control plane instead of the older one-VM-at-a-time and ambiguous rootfs rebuild flows.
Document concurrent multi-VM lifecycle and set commands, clarify that rebuilt images now include mise plus opencode, and spell out when make rootfs needs an explicit base image. Also update the repo guidelines so future changes keep those behaviors documented.
Reduce the control plane's dependency on helper scripts while keeping the hard Linux integration points in the approved shell-out layer.
Replace the bash-driven image build path with a native Go builder that clones and optionally resizes the rootfs, boots a temporary Firecracker VM, provisions the guest over SSH, installs packages and modules, and preserves the package-manifest sidecar.
Also replace a few small convenience shell-outs with Go helpers: read process stats from /proc, use os.Truncate for ext4 image growth, add file-clone and normalized-line helpers, drop the sh -c work-disk flattening path, and launch Firecracker via a direct sudo command.
Add tests for the new SSH/archive and system helpers, plus a policy test that keeps os/exec imports confined to cli/firecracker/system. Update the docs to describe customize.sh as a manual helper rather than the daemon's image-build backend.
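For reference, a sketch of how such an import-policy test can be written; the walked directory and the allowed package set are illustrative, not the repository's actual layout.

```go
package policy_test

import (
	"go/parser"
	"go/token"
	"io/fs"
	"path/filepath"
	"strings"
	"testing"
)

// TestOSExecConfinedToApprovedPackages fails if os/exec is imported outside
// the approved shell-out packages.
func TestOSExecConfinedToApprovedPackages(t *testing.T) {
	allowed := map[string]bool{"cli": true, "firecracker": true, "system": true}

	err := filepath.WalkDir("../..", func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() || !strings.HasSuffix(path, ".go") {
			return err
		}
		f, perr := parser.ParseFile(token.NewFileSet(), path, nil, parser.ImportsOnly)
		if perr != nil {
			return perr
		}
		for _, imp := range f.Imports {
			if strings.Trim(imp.Path.Value, `"`) == "os/exec" &&
				!allowed[filepath.Base(filepath.Dir(path))] {
				t.Errorf("%s imports os/exec outside the approved shell-out layer", path)
			}
		}
		return nil
	})
	if err != nil {
		t.Fatal(err)
	}
}
```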
Validated with go mod tidy, go test ./..., and make build.
Serve daemon-managed .vm names directly from bangerd on 127.0.0.1:42069 instead of shelling out to mapdns. This keeps DNS state tied to VM lifecycle and lets the daemon rebuild records from running VMs after startup or reconcile.
Add a small in-process authoritative DNS server, register and remove records from the VM start/stop/delete paths, and show the listener in daemon status. Remove the mapdns config and preflight surface, stop helper-flow DNS publishing in customize.sh and interactive.sh, drop dns.sh from the runtime bundle, and update docs/tests for the new local-resolver integration model.
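A minimal sketch of an in-process authoritative resolver of this shape, using the widely used github.com/miekg/dns package; whether bangerd uses that library, plus the TTL and the sample record, are assumptions.

```go
// A toy authoritative DNS server for *.vm names on the daemon's resolver port.
package main

import (
	"net"
	"strings"
	"sync"

	"github.com/miekg/dns"
)

type vmDNS struct {
	mu      sync.RWMutex
	records map[string]net.IP // e.g. "codex-smoke.vm." -> 172.16.0.5
}

func (s *vmDNS) handle(w dns.ResponseWriter, r *dns.Msg) {
	m := new(dns.Msg)
	m.SetReply(r)
	m.Authoritative = true
	for _, q := range r.Question {
		if q.Qtype != dns.TypeA {
			continue
		}
		s.mu.RLock()
		ip, ok := s.records[strings.ToLower(q.Name)]
		s.mu.RUnlock()
		if !ok {
			continue
		}
		m.Answer = append(m.Answer, &dns.A{
			Hdr: dns.RR_Header{Name: q.Name, Rrtype: dns.TypeA, Class: dns.ClassINET, Ttl: 30},
			A:   ip,
		})
	}
	w.WriteMsg(m)
}

func main() {
	s := &vmDNS{records: map[string]net.IP{"codex-smoke.vm.": net.ParseIP("172.16.0.5")}}
	dns.HandleFunc("vm.", s.handle)
	srv := &dns.Server{Addr: "127.0.0.1:42069", Net: "udp"}
	if err := srv.ListenAndServe(); err != nil {
		panic(err)
	}
}
```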
Validated with GOCACHE=/tmp/banger-gocache go test ./..., GOCACHE=/tmp/banger-gocache make build, and bash -n customize.sh interactive.sh.
Remove the last shell-owned NAT surface by extracting the iptables logic into a shared Go package and using it from both bangerd and a hidden helper bridge in the CLI.
Route customize.sh and interactive.sh through banger internal nat up/down so the remaining shell helpers reuse the same rule logic, resolve the local banger binary explicitly, and tear NAT back down during cleanup.
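A sketch of the kind of rule logic a shared NAT package can carry, assuming MASQUERADE plus forward rules for a VM subnet and root privileges; the subnet, uplink interface, and exact rule set are hypothetical.

```go
// Package nat sketches idempotent iptables rule management for VM egress.
package nat

import (
	"fmt"
	"os/exec"
)

// ensureRule appends an iptables rule only if it is not already present, so
// repeated "nat up" calls stay idempotent.
func ensureRule(table string, args ...string) error {
	check := append([]string{"-t", table, "-C"}, args...)
	if exec.Command("iptables", check...).Run() == nil {
		return nil // already installed
	}
	add := append([]string{"-t", table, "-A"}, args...)
	if out, err := exec.Command("iptables", add...).CombinedOutput(); err != nil {
		return fmt.Errorf("iptables %v: %v: %s", add, err, out)
	}
	return nil
}

// natUp installs masquerading and forwarding for the VM subnet via the uplink.
func natUp(vmSubnet, uplink string) error {
	if err := ensureRule("nat", "POSTROUTING", "-s", vmSubnet, "-o", uplink, "-j", "MASQUERADE"); err != nil {
		return err
	}
	if err := ensureRule("filter", "FORWARD", "-s", vmSubnet, "-j", "ACCEPT"); err != nil {
		return err
	}
	return ensureRule("filter", "FORWARD", "-d", vmSubnet,
		"-m", "conntrack", "--ctstate", "RELATED,ESTABLISHED", "-j", "ACCEPT")
}
```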
Drop nat.sh from the runtime bundle and docs now that NAT is Go-managed everywhere, and keep coverage aligned with the new shared package and helper command.
Validation: go test ./..., bash -n customize.sh interactive.sh verify.sh, make build, and a live ./verify.sh --nat run that installed host rules, reached outbound network access, and cleaned them up successfully.
Stop presenting make runtime-bundle as a turnkey fresh-checkout bootstrap when the checked-in manifest is intentionally empty. The manifest comments, runtimebundle error messages, Make help, README, and AGENTS docs now all describe the same local-first flow: stage an archive, use a separate local manifest copy with url/sha256, then bootstrap ./runtime from that manifest.

Keep the existing package/fetch commands intact, and add a small runtimebundle regression test so the local-manifest guidance does not drift again.

Validated with make help and GOCACHE=/tmp/banger-gocache go test ./internal/runtimebundle.
VM start, image build, and network/setup failures were hard to diagnose because bangerd emitted almost no lifecycle logs and the Firecracker SDK logger was discarded. This adds a daemon-wide JSON logger with configurable log level so failures leave breadcrumbs instead of only side effects.
Log the main daemon and VM lifecycle stages, preserve raw Firecracker and image-build helper output in dedicated files, and include those log paths in daemon status and returned errors. Bridge SDK logrus output into the daemon logger at debug level so low-level Firecracker diagnostics are available without making normal info logs unreadable.
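A sketch of how this wiring can look, assuming log/slog for the daemon's JSON output and a logrus hook for the SDK bridge; the actual logger, field names, and configuration plumbing may differ.

```go
// A toy JSON daemon logger with a logrus bridge for Firecracker SDK output.
package main

import (
	"io"
	"log/slog"
	"os"

	"github.com/sirupsen/logrus"
)

// slogBridge forwards SDK logrus entries into the daemon logger at debug level
// so Firecracker internals stay out of normal info-level output.
type slogBridge struct{ log *slog.Logger }

func (b slogBridge) Levels() []logrus.Level { return logrus.AllLevels }

func (b slogBridge) Fire(e *logrus.Entry) error {
	b.log.Debug(e.Message, "source", "firecracker-sdk")
	return nil
}

func main() {
	level := new(slog.LevelVar) // configurable, e.g. from daemon config
	level.Set(slog.LevelInfo)
	logger := slog.New(slog.NewJSONHandler(os.Stderr, &slog.HandlerOptions{Level: level}))

	sdkLog := logrus.New()
	sdkLog.SetLevel(logrus.DebugLevel)
	sdkLog.SetOutput(io.Discard) // raw SDK output would go to a dedicated file
	sdkLog.AddHook(slogBridge{log: logger})

	logger.Info("vm lifecycle stage", "vm", "codex-smoke", "stage", "boot")
	sdkLog.Debug("sdk detail") // surfaces only when the daemon level is lowered to debug
}
```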
Validation: go test ./... and make build. Left unrelated worktree changes out of this commit, including internal/api/types.go, the deleted shell scripts, and my-rootfs.ext4.
Stop assuming one workstation layout for runtime artifacts, mapdns, and host tooling. The daemon and shell helpers now use portable mapdns configuration, and runtime bundles can carry bundle.json metadata for their default kernel, initrd, modules, rootfs, and helper paths.
Load bundle metadata through config with a legacy layout fallback, thread mapdns_bin/mapdns_data_file through the Go and shell paths, and add command-scoped preflight checks for VM start, NAT, image build, work-disk resize, and SSH so missing tools or artifacts fail with actionable errors.
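A sketch of metadata loading with a legacy-layout fallback; the JSON field names and default paths are illustrative, not the actual bundle.json schema.

```go
// Package bundlemeta sketches reading bundle.json with a legacy fallback.
package bundlemeta

import (
	"encoding/json"
	"os"
	"path/filepath"
)

type Bundle struct {
	Kernel  string `json:"kernel"`
	Initrd  string `json:"initrd"`
	Modules string `json:"modules"`
	Rootfs  string `json:"rootfs"`
	Helpers string `json:"helpers"`
}

// loadBundle reads bundle.json when present and otherwise falls back to a
// fixed legacy layout so older runtime trees keep working.
func loadBundle(runtimeDir string) (Bundle, error) {
	data, err := os.ReadFile(filepath.Join(runtimeDir, "bundle.json"))
	if os.IsNotExist(err) {
		return Bundle{
			Kernel: filepath.Join(runtimeDir, "vmlinux"),
			Rootfs: filepath.Join(runtimeDir, "rootfs.ext4"),
		}, nil
	}
	if err != nil {
		return Bundle{}, err
	}
	var b Bundle
	if err := json.Unmarshal(data, &b); err != nil {
		return Bundle{}, err
	}
	return b, nil
}
```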
Update the runtime-bundle manifest, docs, and tests to match the new model. Verified with go test ./..., make build, and bash -n customize.sh interactive.sh dns.sh make-rootfs.sh verify.sh.
Stop treating Firecracker, kernels, modules, and guest images as tracked source files. Source checkouts now resolve runtime assets from ./runtime, while installed binaries keep using ../lib/banger.
Add a small runtimebundle helper plus runtime-bundle.toml so make can bootstrap, package, and install a runtime bundle with checksum validation. Update the shell helpers and daemon path hints to fail clearly when the bundle is missing instead of assuming repo-root artifacts.
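A minimal sketch of the checksum gate, assuming the manifest records the archive's sha256; the function and parameter names are illustrative.

```go
// Package bundlecheck sketches validating a fetched runtime-bundle archive.
package bundlecheck

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyArchive fails the bootstrap early when the downloaded bundle does not
// match the sha256 recorded in runtime-bundle.toml.
func verifyArchive(path, wantSHA256 string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != wantSHA256 {
		return fmt.Errorf("runtime bundle checksum mismatch: got %s, want %s", got, wantSHA256)
	}
	return nil
}
```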
This removes the tracked runtime blobs from HEAD in favor of an ignored local runtime/ tree. Verified with go test ./..., make build, bash -n on the shell helpers, make -n install, and a temporary package/fetch smoke test. The manifest URL/SHA still need a published bundle before fresh clones can bootstrap, and history rewrite remains a separate rollout step.
Replace the shell-only user workflow with `banger` and `bangerd`: Cobra commands, XDG/SQLite-backed state, managed VM and image lifecycle, and a Bubble Tea TUI for browsing and operating VMs.

Keep Firecracker orchestration behind the daemon so VM specs become persistent objects, and add repo entrypoints for building, installing, and documenting the new flow while still delegating rootfs customization to the existing shell tooling.

Harden the control plane around real usage by reclaiming Firecracker API sockets for the user, restarting stale daemons after rebuilds, and returning the correct `vm.create` payload so the CLI and TUI creation flow work reliably.

Validation: `go test ./...`, `make build`, and a host-side smoke test with `./banger vm create --name codex-smoke`.
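A skeleton of how a command such as `banger vm create` can be wired with Cobra; the flag names and the stubbed daemon call are hypothetical stand-ins, not the real CLI.

```go
// A toy Cobra layout for the banger CLI entrypoint.
package main

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"
)

func main() {
	root := &cobra.Command{Use: "banger", Short: "Manage Firecracker VMs via bangerd"}

	var name string
	create := &cobra.Command{
		Use:   "create",
		Short: "Create a VM through the daemon",
		RunE: func(cmd *cobra.Command, args []string) error {
			// The real CLI would call the daemon's vm.create RPC and print the
			// returned VM record.
			fmt.Printf("would create VM %q via bangerd\n", name)
			return nil
		},
	}
	create.Flags().StringVar(&name, "name", "", "VM name")

	vm := &cobra.Command{Use: "vm", Short: "VM lifecycle commands"}
	vm.AddCommand(create)
	root.AddCommand(vm)

	if err := root.Execute(); err != nil {
		os.Exit(1)
	}
}
```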
Move the default guest package list into a repo manifest and record a hash beside built rootfs images so run/make-rootfs can warn when the docker-ready image is stale.
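A sketch of the staleness check, assuming the manifest hash is written to a sidecar file next to the built image; both file names are illustrative.

```go
// Compare the repo's package manifest against the hash recorded beside a rootfs image.
package main

import (
	"bytes"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"os"
)

// imageIsStale reports whether the rootfs was built from an older package
// manifest than the one currently in the repo.
func imageIsStale(manifestPath, hashSidecarPath string) (bool, error) {
	manifest, err := os.ReadFile(manifestPath)
	if err != nil {
		return false, err
	}
	sum := sha256.Sum256(manifest)
	want := hex.EncodeToString(sum[:])

	recorded, err := os.ReadFile(hashSidecarPath)
	if os.IsNotExist(err) {
		return true, nil // no record: treat as stale and warn
	}
	if err != nil {
		return false, err
	}
	return !bytes.Equal(bytes.TrimSpace(recorded), []byte(want)), nil
}

func main() {
	stale, err := imageIsStale("packages.manifest", "rootfs.ext4.manifest-sha256")
	if err == nil && stale {
		fmt.Fprintln(os.Stderr, "warning: image was built from an older package manifest")
	}
}
```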
Switch the Firecracker launch path to a single sparse root overlay per VM instead of separate /home and /var disks, so many VMs can share the same base image while still installing packages under /var and working from /root.
Keep older images bootable by masking stale home.mount and var.mount units at boot, and scrub those obsolete fstab entries when customize.sh rebuilds an image. Verified with bash -n on the updated scripts; no live VM boot was run in this environment.
Make spawned VMs easier to use and restore from the host.
Add shared DNS and runtime helpers, publish <vm-name>.vm records through mapdns, and teach run/customize/interactive/restore to persist the metadata needed for SSH, DNS cleanup, and clean restores.
Seed per-VM /home and /var disks from the rootfs snapshot so package state is present on first boot, add an interactive customization entrypoint plus ssh.sh and human-friendly list output, and let stop/kill/rm operate on multiple VM identifiers.
Tear down stale TAP, dm, and loop state when VMs stop so restore can recreate them safely, and validate the updated scripts with bash -n plus targeted dry-run harnesses for teardown and restore paths.