banger
Persistent Firecracker development VMs managed through a Go daemon and CLI.
Requirements
- Linux host with KVM (/dev/kvm access)
- Vsock support for post-SSH liveness reminders (/dev/vhost-vsock)
- Core VM lifecycle: sudo, ip, dmsetup, losetup, blockdev, truncate, pgrep, chown, chmod, kill
- Guest rootfs patching: e2cp, e2rm, debugfs
- Guest work disk creation/resizing: mkfs.ext4, e2fsck, resize2fs, mount, umount, cp
- SSH and logs: ssh
- Optional NAT: iptables, sysctl
- Image build: the bundled SSH key plus the tools above; banger image build no longer shells out through customize.sh
banger validates these per command and returns actionable errors instead of
assuming one workstation layout.
Runtime Bundle
Runtime artifacts are no longer tracked directly in Git. Source checkouts use a
generated ./runtime/ bundle, while installed binaries use
$(prefix)/lib/banger.
The bundle contains:
- firecracker
- banger-vsock-agent for the guest-side vsock HTTP health agent and SSH reminder checks
- bundle.json with the bundle's default kernel/initrd/modules/rootfs paths
- a kernel, initrd, and modules tree referenced by bundle.json
- rootfs-docker.ext4
- rootfs-docker.work-seed.ext4 when present, used to seed /root quickly on new VM creates
- rootfs.ext4 when present
- packages.apt
- id_ed25519
- the helper scripts used by manual customization and installs
Bootstrap a source checkout from a local or published runtime archive. The
checked-in runtime-bundle.toml
is a template and intentionally ships with empty url and sha256.
If you need to create a local archive first, do that from a checkout or machine
that already has a populated ./runtime/ tree:
make runtime-package
cp dist/banger-runtime.tar.gz /path/to/fresh-checkout/dist/
In the fresh checkout:
cp runtime-bundle.toml runtime-bundle.local.toml
Edit runtime-bundle.local.toml to point at the staged archive and checksum:
url = "./dist/banger-runtime.tar.gz"
sha256 = "<sha256 printed by make runtime-package>"
Then bootstrap ./runtime/ with the local manifest copy:
make runtime-bundle RUNTIME_MANIFEST=runtime-bundle.local.toml
url may be a relative path, absolute path, file:///... URL, or HTTP(S)
URL. make install will not fetch artifacts for you.
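If the archive was produced or fetched some other way, you can compute the checksum yourself before pasting it into the local manifest; this sketch assumes coreutils' sha256sum is available on the host:

sha256sum dist/banger-runtime.tar.gz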
Build
make build
Run make build after ./runtime/ has been bootstrapped. It also rebuilds the
bundled banger-vsock-agent guest helper in ./runtime/.
Install into ~/.local/bin by default, with the runtime bundle under
~/.local/lib/banger:
make install
After make install, the installed banger and bangerd do not need the repo
checkout to keep working.
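The ~/.local prefix can presumably be overridden the usual Make way; the variable name below is inferred from the $(prefix)/lib/banger path mentioned above, so treat it as a sketch rather than a documented flag:

make install prefix=/opt/banger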
Basic VM Workflow
Create and boot a VM:
banger vm create --name calm-otter --disk-size 16G
Check host/runtime readiness before creating VMs:
banger doctor
List VMs:
banger vm list
Inspect a VM:
banger vm show calm-otter
banger vm stats calm-otter
SSH into a running VM:
banger vm ssh calm-otter
When the SSH session exits normally, banger checks the guest over vsock and
reminds you if the VM is still running.
Inspect host-reachable listening ports for a running VM:
banger vm ports calm-otter
Stop, restart, kill, or delete it:
banger vm stop calm-otter
banger vm start calm-otter
banger vm restart calm-otter
banger vm kill --signal TERM calm-otter
banger vm delete calm-otter
Update stopped VM settings:
banger vm set calm-otter --memory 2048 --vcpu 4 --disk-size 32G
Lifecycle and set actions also accept multiple VM refs and run them
concurrently:
banger vm stop calm-otter buildbox api-1
banger vm kill --signal KILL aa12bb34 cc56dd78
banger vm set --nat web-1 web-2 web-3
Daemon
The CLI auto-starts bangerd when needed.
Useful daemon commands:
banger daemon status
banger daemon socket
banger daemon stop
banger daemon status prints the daemon PID, socket path, daemon log path, and
the built-in DNS listener address.
State lives under XDG directories:
- config: ~/.config/banger
- state: ~/.local/state/banger
- cache: ~/.cache/banger
- runtime socket: $XDG_RUNTIME_DIR/banger/bangerd.sock
Installed binaries resolve their runtime bundle from ../lib/banger relative to
the executable. Source-checkout binaries resolve it from ./runtime next to the
repo-built ./banger. You can override either with runtime_dir in
~/.config/banger/config.toml or BANGER_RUNTIME_DIR.
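For example, either of the following points the CLI and daemon at a custom bundle location; the first is a one-off environment override, the second persists it in the config file, and the path itself is illustrative:

BANGER_RUNTIME_DIR=/opt/banger/runtime banger doctor
echo 'runtime_dir = "/opt/banger/runtime"' >> ~/.config/banger/config.toml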
Useful config keys:
- log_level
- runtime_dir
- tap_pool_size
- firecracker_bin
- namegen_path
- customize_script (manual helper compatibility; banger image build is Go-native)
- vsock_agent_path
- default_rootfs
- default_work_seed
- default_base_rootfs
- default_kernel
- default_initrd
- default_modules_dir
- default_packages_file
Guest SSH access always uses the private key shipped in the resolved runtime
bundle. ssh_key_path is no longer a supported override for banger vm ssh,
VM start key injection, or daemon guest provisioning.
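For reference, a manual SSH session with the bundled key would look roughly like the following; the install-layout key path, the root login, and how you obtain the guest address are all assumptions about your setup rather than a documented interface:

ssh -i ~/.local/lib/banger/id_ed25519 root@<guest-ip>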
Doctor
banger doctor runs the same readiness checks the Go control plane uses for VM
start, host-integrated features, and image builds. It reports runtime bundle
state, core VM host tools, current feature readiness, and image-build
prerequisites in a concise pass/warn/fail list.
Use it when bringing up a new machine, after changing the runtime bundle, or before adding new host-integrated VM features.
Logs
- daemon lifecycle logs: ~/.local/state/banger/bangerd.log
- raw Firecracker output per VM: ~/.local/state/banger/vms/<vm-id>/firecracker.log
- raw image-build helper output: ~/.local/state/banger/image-build/*.log
bangerd.log is structured JSON. Set log_level in
~/.config/banger/config.toml or BANGER_LOG_LEVEL to one of debug,
info, warn, or error.
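For example, to raise verbosity for a single invocation or persistently (the debug level is just an example; edit the file directly if the key already exists):

BANGER_LOG_LEVEL=debug banger daemon status
echo 'log_level = "debug"' >> ~/.config/banger/config.toml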
Images
List images:
banger image list
Build a managed image:
banger image build --name docker-dev --docker
Rebuilt images:
- install a pinned mise at /usr/local/bin/mise and activate it for bash login and interactive shells
- install opencode through mise and expose /usr/local/bin/opencode
- configure tmux-resurrect plus tmux-continuum for root, with periodic autosaves and manual-only restore by default
- start a host-reachable opencode serve service on guest TCP port 4096
- bake in the banger-vsock-agent systemd service used by the post-SSH reminder path and guest health checks
- emit a work-seed.ext4 sidecar that lets new VMs clone a prepared /root work disk instead of rebuilding it from scratch on every create
Show or delete images:
banger image show docker-dev
banger image delete docker-dev
Promote an existing unmanaged image into a managed one:
banger image promote default
banger image promote void-exp
Promotion copies the image's rootfs and optional work-seed into the
daemon's managed image state directory and keeps the same image ID, so existing
VM references stay valid. The image's kernel, initrd, modules, and package
manifest paths stay pointed at their current locations.
banger auto-registers the bundled default_rootfs image when it exists. If
the bundle does not include a separate base rootfs.ext4, image build falls
back to using rootfs-docker.ext4 as its default base image.
Networking And DNS
Enable NAT when creating or updating a VM:
banger vm create --name web --nat
banger vm set web --nat
banger vm set web --no-nat
NAT is applied by the Go control plane using host iptables rules derived from
the VM's current guest IP and TAP device. The remaining shell helpers also
route NAT changes through banger instead of a standalone shell NAT script.
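banger derives and applies the actual rules itself, so the following is a rough mental model only; the guest address and TAP device names are placeholders and the real rule set may differ:

sudo sysctl -w net.ipv4.ip_forward=1
sudo iptables -t nat -A POSTROUTING -s <guest-ip>/32 -j MASQUERADE
sudo iptables -A FORWARD -i <tap-device> -j ACCEPT
sudo iptables -A FORWARD -o <tap-device> -m state --state RELATED,ESTABLISHED -j ACCEPT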
bangerd also serves a tiny authoritative DNS service on 127.0.0.1:42069
for daemon-managed VMs. Known A records resolve <vm-name>.vm to the VM's
guest IPv4 address. Integrate your local resolver separately if you want
transparent .vm lookups on the host.
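To check what the listener would answer for a given VM, query it directly; this assumes dig (or another DNS client) is installed on the host:

dig @127.0.0.1 -p 42069 calm-otter.vm A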
banger vm ports asks the guest-side banger-vsock-agent to run ss, then
prints host-usable endpoints plus the owning process/command. TCP listeners get
short best-effort HTTP and HTTPS probes; detected web listeners are shown as
http or https, and the endpoint column becomes a clickable URL such as
https://<hostname>.vm:port/. Older images without ss may need rebuilding
before vm ports works.
Newly rebuilt images also start opencode serve by default on guest TCP port
4096, bound on guest interfaces so the host can reach it directly at the
guest IP or via the endpoint shown by banger vm ports.
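A quick reachability check from the host is a plain HTTP request against that endpoint; the address placeholder below is whatever guest IP or .vm name banger vm ports reports for your VM, and this assumes opencode serve answers plain HTTP on that port:

curl http://<guest-ip>:4096/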
Storage Model
- VMs share a read-only base rootfs image.
- Each VM gets its own sparse writable system overlay for /.
- Each VM gets its own persistent ext4 work disk mounted at /root.
- When an image has a work-seed.ext4 sidecar, new VM creates clone that seed and only resize it when needed. Older images still work, but create more slowly because /root must be built from scratch.
- The daemon can keep a small idle TAP pool warm in the background so VM create does not need to synchronously create a fresh TAP every time. tap_pool_size controls the pool depth (see the sketch after this list).
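tap_pool_size lives in the same config file as the other keys listed above; a minimal sketch, with the depth value chosen arbitrarily:

echo 'tap_pool_size = 4' >> ~/.config/banger/config.toml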
Architecture Notes
The Go daemon is the primary control plane. VM host integrations such as the
built-in .vm DNS service, NAT, and /root work-disk wiring now sit behind a
capability pipeline in the daemon instead of being open-coded through the VM
lifecycle. Guest boot-time files and mounts are rendered through a structured
guest-config builder rather than ad hoc fstab string mutation.
That split is intentional: future host-integrated features should plug into the
daemon capability path and banger doctor checks first, with the remaining
shell helpers treated as manual workflows rather than architecture drivers.
- Stopping a VM preserves its overlay and work disk.
Rebuilding The Repo Default Rootfs
packages.apt controls the base apt packages baked into rebuilt images,
including guest tools such as ss used by banger vm ports.
To rebuild the source-checkout default image in ./runtime/rootfs-docker.ext4:
make rootfs
That rebuild also regenerates ./runtime/rootfs-docker.work-seed.ext4, which
the daemon uses to speed up future vm create calls, and bakes in the default
host-reachable opencode server service.
If your runtime bundle does not include ./runtime/rootfs.ext4, pass an
explicit base image instead:
./make-rootfs.sh --base-rootfs /path/to/base-rootfs.ext4
If the package manifest changed and you want a fresh source-checkout image:
rm -f ./runtime/rootfs-docker.ext4 ./runtime/rootfs-docker.ext4.packages.sha256
make rootfs
make rootfs expects a bootstrapped runtime bundle. If ./runtime/rootfs.ext4
is not available, pass an explicit --base-rootfs to ./make-rootfs.sh.
Existing VMs keep using their current image and disks; rebuilds only affect VMs
created from the rebuilt image afterward. Restarting an existing VM is not
enough to pick up guest provisioning changes such as the default opencode
server service.
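One way to pick those changes up is to delete the old VM and create a fresh one from the rebuilt image; the name and flags below are illustrative, and since stop (not delete) is what preserves the overlay and work disk, copy anything you need out of /root first:

banger vm delete calm-otter
banger vm create --name calm-otter --disk-size 16G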
Experimental Void Rootfs
There is also a separate, opt-in builder for an experimental Void Linux guest path:
make void-kernel
make rootfs-void
That writes:
- ./runtime/void-kernel/ when make void-kernel is used
- ./runtime/rootfs-void.ext4
- ./runtime/rootfs-void.work-seed.ext4
This path is intentionally local-only and does not change the default Debian
image flow. make void-kernel stages an actual Void linux6.12 kernel package
under ./runtime/void-kernel/, including the raw vmlinuz, extracted
Firecracker vmlinux, a matching initramfs, the matching config, and the
matching modules tree. The initramfs is generated locally with dracut
against the downloaded Void sysroot so the kernel, initrd, and modules stay
aligned. make rootfs-void then prefers that staged modules tree when it exists;
otherwise it falls back to the runtime bundle modules. The rootfs builder
itself still builds a lean x86_64-glibc Void userspace with:
- bash installed for interactive/admin use
- pinned mise installed at /usr/local/bin/mise, activated for root bash shells
- opencode installed through mise, with /usr/local/bin/opencode available by default
- a guest network bootstrap that configures the VM NIC from the kernel ip= boot arg
- a host-reachable opencode serve runit service enabled on guest TCP port 4096
- docker plus docker-compose installed from Void packages
- the docker runit service enabled, with Docker netfilter/forwarding kernel prep
- openssh enabled under runit
- the bundled banger-vsock-agent health agent enabled under runit
- root normalized to /bin/bash while keeping /bin/sh as the distro's system shell
- a generated /root work-seed for fast creates
It still keeps some Debian-oriented extras out for now:
- no tmux plugin defaults
The builder fetches official static XBPS tools and packages from the Void
mirror during the build. The kernel fetcher and rootfs builder currently
support only x86_64.
The package set comes from packages.void.
You can override the mirror, size, output path, or kernel package directly:
./make-void-kernel.sh --kernel-package linux6.12
./make-rootfs-void.sh --mirror https://repo-default.voidlinux.org --size 2G
The fastest local iteration loop does not require changing your default image config at all:
make void-kernel
make rootfs-void
make void-register
./banger vm create --image void-exp --name void-dev
./banger vm ssh void-dev
After changing the package set, guest provisioning, or the staged kernel artifacts, rebuild the staged Void kernel or Void rootfs and then recreate existing void-exp VMs; restarting alone will not pick up new image contents, a new kernel, or a new /root work-seed.
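For example, after a make void-kernel / make rootfs-void / make void-register cycle, recreating the VM from the iteration loop above looks like:

./banger vm delete void-dev
./banger vm create --image void-exp --name void-dev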
There is also a smoke path for the experimental image:
make verify-void
make void-register uses the unmanaged image registration path to create or
update a void-exp image record in place, so repeated rebuilds do not require
editing ~/.config/banger/config.toml. It expects a complete staged Void
kernel set under ./runtime/void-kernel/ and points the experimental image at
the staged Void vmlinux, initramfs, and matching modules tree.
There is also a one-step helper target:
make void-vm VOID_VM_NAME=void-a
If you really want the Void image to become your default for vm create
without --image, use the checked-in override template at
examples/void-exp.config.toml
and merge its four settings into ~/.config/banger/config.toml.
banger image build remains Debian-only in this pass. Do not point
default_base_rootfs at the Void artifact yet.
Registering Unmanaged Images
You can also register any local rootfs as an unmanaged image record without changing global defaults:
banger image register --name local-test --rootfs /abs/path/rootfs.ext4
Optional paths let you point at an existing work seed, kernel, initrd, modules, and package manifest:
banger image register \
--name void-exp \
--rootfs ./runtime/rootfs-void.ext4 \
--work-seed ./runtime/rootfs-void.work-seed.ext4 \
--kernel ./runtime/void-kernel/boot/vmlinux-6.12.77_1 \
--initrd ./runtime/void-kernel/boot/initramfs-6.12.77_1.img \
--modules ./runtime/void-kernel/lib/modules/6.12.77_1 \
--packages ./packages.void
If an unmanaged image with the same name already exists, image register
updates it in place so future vm create --image <name> calls pick up the new
artifacts immediately.
Maintaining The Runtime Bundle
The checked-in runtime-bundle.toml
is a template. Keep bundle_metadata accurate there, but use a separate local
manifest copy when you need concrete url and sha256 values for bootstrap
testing or publication.
Package a local ./runtime/ tree into an archive:
make runtime-package
That writes dist/banger-runtime.tar.gz and prints its SHA256 so you can update
a local manifest copy before testing bootstrap changes or publishing the
archive elsewhere.
Benchmarking Create Time
Benchmark the current host's vm create wall time plus first-SSH readiness:
make bench-create
Pass options through ARGS, for example:
make bench-create ARGS="--runs 3 --image docker-dev"
The benchmark prints JSON with:
- create_ms: wall time for banger vm create
- ssh_ready_ms: wall time from create start until banger vm ssh <vm> -- true succeeds
Remaining Shell Helpers
The runtime VM lifecycle is managed through banger. The remaining shell scripts are not the primary user interface:
- customize.sh: manual reference flow for rootfs customization; banger image build is now Go-native, but the script still reads assets from BANGER_RUNTIME_DIR and stores transient state under BANGER_STATE_DIR / XDG state
- make-rootfs.sh: convenience wrapper for rebuilding ./runtime/rootfs-docker.ext4
- interactive.sh: manual one-off rootfs customization over SSH
- packages.sh: shell helper library
- verify.sh: smoke test for the Go workflow (./verify.sh --nat adds NAT coverage)