banger
Persistent Firecracker development VMs managed through a Go daemon, CLI, and TUI.
Requirements
- Linux host with KVM (`/dev/kvm` access)
- Vsock support for post-SSH liveness reminders (`/dev/vhost-vsock`)
- Core VM lifecycle: `sudo`, `ip`, `dmsetup`, `losetup`, `blockdev`, `truncate`, `pgrep`, `chown`, `chmod`, `kill`
- Guest rootfs patching: `e2cp`, `e2rm`, `debugfs`
- Guest work disk creation/resizing: `mkfs.ext4`, `e2fsck`, `resize2fs`, `mount`, `umount`, `cp`
- SSH and logs: `ssh`
- Optional NAT: `iptables`, `sysctl`
- Image build: the bundled SSH key plus the tools above; `banger image build` no longer shells out through `customize.sh`
banger validates these per command and returns actionable errors instead of
assuming one workstation layout.
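That validation amounts to a per-tool lookup. A simplified sketch — the tool list here is illustrative, not banger's exact set:

```shell
# Report each required host tool as present or missing, the way a
# preflight check might (illustrative tool list).
for tool in ip losetup truncate; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "ok $tool"
  else
    echo "missing $tool"
  fi
done
```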
Runtime Bundle
Runtime artifacts are no longer tracked directly in Git. Source checkouts use a
generated ./runtime/ bundle, while installed binaries use
$(prefix)/lib/banger.
The bundle contains:
- `firecracker`
- `banger-vsock-agent` for the guest-side vsock HTTP health agent and SSH reminder checks
- `bundle.json` with the bundle's default kernel/initrd/modules/rootfs paths
- a kernel, initrd, and modules tree referenced by `bundle.json`
- `rootfs-docker.ext4`
- `rootfs-docker.work-seed.ext4` when present, used to seed `/root` quickly on new VM creates
- `rootfs.ext4` when present
- `packages.apt`
- `id_ed25519`
- the helper scripts used by manual customization and installs
Bootstrap a source checkout from a local or published runtime archive. The
checked-in runtime-bundle.toml
is a template and intentionally ships with empty url and sha256.
If you need to create a local archive first, do that from a checkout or machine
that already has a populated ./runtime/ tree:
make runtime-package
cp dist/banger-runtime.tar.gz /path/to/fresh-checkout/dist/
In the fresh checkout:
cp runtime-bundle.toml runtime-bundle.local.toml
Edit runtime-bundle.local.toml to point at the staged archive and checksum:
url = "./dist/banger-runtime.tar.gz"
sha256 = "<sha256 printed by make runtime-package>"
Then bootstrap ./runtime/ with the local manifest copy:
make runtime-bundle RUNTIME_MANIFEST=runtime-bundle.local.toml
url may be a relative path, absolute path, file:///... URL, or HTTP(S)
URL. make install will not fetch artifacts for you.
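Before bootstrapping, it can help to confirm the staged archive matches the manifest checksum. A self-contained sketch, with throwaway files standing in for dist/banger-runtime.tar.gz and runtime-bundle.local.toml:

```shell
# Create a stand-in archive and manifest, then compare checksums the way
# a bootstrap sanity check would.
tmp=$(mktemp -d)
printf 'fake runtime archive' > "$tmp/banger-runtime.tar.gz"
sum=$(sha256sum "$tmp/banger-runtime.tar.gz" | awk '{print $1}')
printf 'url = "./dist/banger-runtime.tar.gz"\nsha256 = "%s"\n' "$sum" > "$tmp/runtime-bundle.local.toml"
want=$(sed -n 's/^sha256 = "\(.*\)"$/\1/p' "$tmp/runtime-bundle.local.toml")
got=$(sha256sum "$tmp/banger-runtime.tar.gz" | awk '{print $1}')
[ "$got" = "$want" ] && echo "checksum ok" || echo "checksum mismatch"
rm -rf "$tmp"
```

This prints `checksum ok` when the hash recorded in the manifest matches the archive on disk.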
Build
make build
Run make build after ./runtime/ has been bootstrapped. It also rebuilds the
bundled banger-vsock-agent guest helper in ./runtime/.
Install into ~/.local/bin by default, with the runtime bundle under
~/.local/lib/banger:
make install
After make install, the installed banger and bangerd do not need the repo
checkout to keep working.
Basic VM Workflow
Create and boot a VM:
banger vm create --name calm-otter --disk-size 16G
Check host/runtime readiness before creating VMs:
banger doctor
List VMs:
banger vm list
Inspect a VM:
banger vm show calm-otter
banger vm stats calm-otter
SSH into a running VM:
banger vm ssh calm-otter
When the SSH session exits normally, banger checks the guest over vsock and
reminds you if the VM is still running.
Inspect host-reachable listening ports for a running VM:
banger vm ports calm-otter
Stop, restart, kill, or delete it:
banger vm stop calm-otter
banger vm start calm-otter
banger vm restart calm-otter
banger vm kill --signal TERM calm-otter
banger vm delete calm-otter
Update stopped VM settings:
banger vm set calm-otter --memory 2048 --vcpu 4 --disk-size 32G
Lifecycle and set actions also accept multiple VM refs and run them
concurrently:
banger vm stop calm-otter buildbox api-1
banger vm kill --signal KILL aa12bb34 cc56dd78
banger vm set --nat web-1 web-2 web-3
Launch the TUI:
banger tui
Daemon
The CLI auto-starts bangerd when needed.
Useful daemon commands:
banger daemon status
banger daemon socket
banger daemon stop
banger daemon status prints the daemon PID, socket path, daemon log path, and
the built-in DNS listener address.
State lives under XDG directories:
- config: `~/.config/banger`
- state: `~/.local/state/banger`
- cache: `~/.cache/banger`
- runtime socket: `$XDG_RUNTIME_DIR/banger/bangerd.sock`
Installed binaries resolve their runtime bundle from ../lib/banger relative to
the executable. Source-checkout binaries resolve it from ./runtime next to the
repo-built ./banger. You can override either with runtime_dir in
~/.config/banger/config.toml or BANGER_RUNTIME_DIR.
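That lookup order can be sketched as follows; the precedence of the config key versus the environment variable is an assumption here, and the paths are placeholders:

```shell
# Demo of runtime-dir resolution: explicit override, installed layout,
# then source checkout. The real logic lives in the Go binaries.
BANGER_RUNTIME_DIR=/tmp/custom-runtime
exe_dir=/home/dev/.local/bin
if [ -n "${BANGER_RUNTIME_DIR:-}" ]; then
  runtime_dir=$BANGER_RUNTIME_DIR        # explicit override wins
elif [ -d "$exe_dir/../lib/banger" ]; then
  runtime_dir=$exe_dir/../lib/banger     # installed: ../lib/banger
else
  runtime_dir=./runtime                  # source checkout fallback
fi
echo "$runtime_dir"
```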
Useful config keys:
- `log_level`
- `runtime_dir`
- `tap_pool_size`
- `firecracker_bin`
- `namegen_path`
- `customize_script` (manual helper compatibility; `banger image build` is Go-native)
- `vsock_agent_path`
- `default_rootfs`
- `default_work_seed`
- `default_base_rootfs`
- `default_kernel`
- `default_initrd`
- `default_modules_dir`
- `default_packages_file`
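A minimal override file illustrating a few of these keys; the values below are placeholders, not recommendations:

```toml
# ~/.config/banger/config.toml (placeholder values)
log_level = "debug"
tap_pool_size = 4
runtime_dir = "/home/dev/.local/lib/banger"
```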
Guest SSH access always uses the private key shipped in the resolved runtime
bundle. ssh_key_path is no longer a supported override for banger vm ssh,
VM start key injection, or daemon guest provisioning.
Doctor
banger doctor runs the same readiness checks the Go control plane uses for VM
start, host-integrated features, and image builds. It reports runtime bundle
state, core VM host tools, current feature readiness, and image-build
prerequisites in a concise pass/warn/fail list.
Use it when bringing up a new machine, after changing the runtime bundle, or before adding new host-integrated VM features.
Logs
- daemon lifecycle logs: `~/.local/state/banger/bangerd.log`
- raw Firecracker output per VM: `~/.local/state/banger/vms/<vm-id>/firecracker.log`
- raw image-build helper output: `~/.local/state/banger/image-build/*.log`
bangerd.log is structured JSON. Set log_level in
~/.config/banger/config.toml or BANGER_LOG_LEVEL to one of debug,
info, warn, or error.
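Because each line is a JSON object, ad hoc filtering is easy. The field names below are assumptions for illustration, not a documented schema:

```shell
# Synthetic bangerd.log-style line; real field names may differ.
line='{"level":"warn","msg":"tap pool low"}'
echo "$line" | grep -o '"level":"[a-z]*"'
```

This prints the matched `"level":"warn"` fragment.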
Images
List images:
banger image list
Build a managed image:
banger image build --name docker-dev --docker
Rebuilt images install a pinned mise at /usr/local/bin/mise, activate it
for bash login and interactive shells, install opencode through mise,
configure tmux-resurrect plus tmux-continuum for root with periodic
autosaves and manual-only restore by default, and bake in the
banger-vsock-agent systemd service used by the post-SSH reminder path and
guest health checks. They
also emit a work-seed.ext4 sidecar that lets new VMs clone a prepared /root
work disk instead of rebuilding it from scratch on every create.
Show or delete images:
banger image show docker-dev
banger image delete docker-dev
banger auto-registers the bundled default_rootfs image when it exists. If
the bundle does not include a separate base rootfs.ext4, image build falls
back to using rootfs-docker.ext4 as its default base image.
Networking And DNS
Enable NAT when creating or updating a VM:
banger vm create --name web --nat
banger vm set web --nat
banger vm set web --no-nat
NAT is applied by the Go control plane using host iptables rules derived from
the VM's current guest IP and TAP device. The remaining shell helpers also
route NAT changes through banger instead of a standalone shell NAT script.
bangerd also serves a tiny authoritative DNS service on 127.0.0.1:42069
for daemon-managed VMs. Known A records resolve <vm-name>.vm to the VM's
guest IPv4 address. Integrate your local resolver separately if you want
transparent .vm lookups on the host.
banger vm ports asks the guest-side banger-vsock-agent to run ss, then
prints host-usable endpoints plus the owning process/command. TCP listeners get
short best-effort HTTP and HTTPS probes; detected web listeners are shown as
http or https, and the endpoint column becomes a clickable URL such as
https://<hostname>.vm:port/. Older images without ss may need rebuilding
before vm ports works.
Storage Model
- VMs share a read-only base rootfs image.
- Each VM gets its own sparse writable system overlay for `/`.
- Each VM gets its own persistent ext4 work disk mounted at `/root`.
- When an image has a `work-seed.ext4` sidecar, new VM creates clone that seed and only resize it when needed. Older images still work, but create more slowly because `/root` must be built from scratch.
- The daemon can keep a small idle TAP pool warm in the background so VM create does not need to synchronously create a fresh TAP every time. `tap_pool_size` controls the pool depth.
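The "sparse" property of the overlay can be demonstrated with a throwaway file (illustration only; banger provisions the real overlays itself, and GNU coreutils is assumed):

```shell
# A truncate-created file has a large apparent size but allocates no
# blocks until written.
tmp=$(mktemp -d)
truncate -s 1G "$tmp/overlay.img"
echo "apparent=$(du --apparent-size -h "$tmp/overlay.img" | cut -f1) actual=$(du -h "$tmp/overlay.img" | cut -f1)"
rm -rf "$tmp"
```

On a typical filesystem this prints `apparent=1.0G actual=0`.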
Architecture Notes
The Go daemon is the primary control plane. VM host integrations such as the
built-in .vm DNS service, NAT, and /root work-disk wiring now sit behind a
capability pipeline in the daemon instead of being open-coded through the VM
lifecycle. Guest boot-time files and mounts are rendered through a structured
guest-config builder rather than ad hoc fstab string mutation.
That split is intentional: future host-integrated features should plug into the
daemon capability path and banger doctor checks first, with the remaining
shell helpers treated as manual workflows rather than architecture drivers.
Stopping a VM preserves its overlay and work disk.
Rebuilding The Repo Default Rootfs
packages.apt controls the base apt packages baked into rebuilt images,
including guest tools such as ss used by banger vm ports.
To rebuild the source-checkout default image in ./runtime/rootfs-docker.ext4:
make rootfs
That rebuild also regenerates ./runtime/rootfs-docker.work-seed.ext4, which
the daemon uses to speed up future vm create calls.
If your runtime bundle does not include ./runtime/rootfs.ext4, pass an
explicit base image instead:
./make-rootfs.sh --base-rootfs /path/to/base-rootfs.ext4
If the package manifest changed and you want a fresh source-checkout image:
rm -f ./runtime/rootfs-docker.ext4 ./runtime/rootfs-docker.ext4.packages.sha256
make rootfs
make rootfs expects a bootstrapped runtime bundle. If ./runtime/rootfs.ext4
is not available, pass an explicit --base-rootfs to ./make-rootfs.sh.
Existing VMs keep using their current image and disks; rebuilds only affect VMs
created from the rebuilt image afterward.
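The `.packages.sha256` sidecar implies a simple gate on rebuilds. A sketch with stand-in files, under the assumption that the rebuild is skipped when the manifest hash matches the recorded value:

```shell
# Stand-in files: a package manifest and its recorded hash sidecar.
tmp=$(mktemp -d)
printf 'curl\niproute2\n' > "$tmp/packages.apt"
sha256sum "$tmp/packages.apt" | awk '{print $1}' > "$tmp/rootfs.ext4.packages.sha256"
new=$(sha256sum "$tmp/packages.apt" | awk '{print $1}')
old=$(cat "$tmp/rootfs.ext4.packages.sha256")
if [ "$new" = "$old" ]; then
  echo "manifest unchanged: reuse image"
else
  echo "manifest changed: rebuild"
fi
rm -rf "$tmp"
```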
Experimental Void Rootfs
There is also a separate, opt-in builder for an experimental Void Linux guest path:
make rootfs-void
That writes:
- `./runtime/rootfs-void.ext4`
- `./runtime/rootfs-void.work-seed.ext4`
This path is intentionally local-only and does not change the default Debian
image flow. It reuses the current runtime bundle kernel, initrd, and modules,
but builds a lean x86_64-glibc Void userspace with:
- `bash` installed for interactive/admin use
- pinned `mise` installed at `/usr/local/bin/mise`, activated for `root` bash shells
- `opencode` installed through `mise`, with `/usr/local/bin/opencode` available by default
- `docker` plus `docker-compose` installed from Void packages
- the `docker` runit service enabled, with Docker netfilter/forwarding kernel prep
- `openssh` enabled under runit
- the bundled `banger-vsock-agent` health agent enabled under runit
- `root` normalized to `/bin/bash` while keeping `/bin/sh` as the distro's system shell
- a generated `/root` work-seed for fast creates
It still keeps some Debian-oriented extras out for now:
- no tmux plugin defaults
The builder fetches official static XBPS tools and packages from the Void
mirror during the build. It currently supports only x86_64-glibc.
The package set comes from packages.void.
You can override the mirror, size, or output path directly:
./make-rootfs-void.sh --mirror https://repo-default.voidlinux.org --size 2G
The fastest local iteration loop does not require changing your default image config at all:
make rootfs-void
make void-register
./banger vm create --image void-exp --name void-dev
./banger vm ssh void-dev
Rebuild the Void rootfs and recreate existing void-exp VMs after changing the
package set or guest provisioning; restart alone will not update the image
contents or /root work-seed.
There is also a smoke path for the experimental image:
make verify-void
make void-register uses the unmanaged image registration path to create or
update a void-exp image record in place, so repeated rebuilds do not require
editing ~/.config/banger/config.toml.
There is also a one-step helper target:
make void-vm VOID_VM_NAME=void-a
If you really want the Void image to become your default for vm create
without --image, use the checked-in override template at
examples/void-exp.config.toml
and merge its four settings into ~/.config/banger/config.toml.
banger image build remains Debian-only in this pass. Do not point
default_base_rootfs at the Void artifact yet.
Registering Unmanaged Images
You can also register any local rootfs as an unmanaged image record without changing global defaults:
banger image register --name local-test --rootfs /abs/path/rootfs.ext4
Optional paths let you point at an existing work seed, kernel, initrd, modules, and package manifest:
banger image register \
--name void-exp \
--rootfs ./runtime/rootfs-void.ext4 \
--work-seed ./runtime/rootfs-void.work-seed.ext4 \
--packages ./packages.void
If an unmanaged image with the same name already exists, image register
updates it in place so future vm create --image <name> calls pick up the new
artifacts immediately.
Maintaining The Runtime Bundle
The checked-in runtime-bundle.toml
is a template. Keep bundle_metadata accurate there, but use a separate local
manifest copy when you need concrete url and sha256 values for bootstrap
testing or publication.
Package a local ./runtime/ tree into an archive:
make runtime-package
That writes dist/banger-runtime.tar.gz and prints its SHA256 so you can update
a local manifest copy before testing bootstrap changes or publishing the
archive elsewhere.
Benchmarking Create Time
Benchmark the current host's vm create wall time plus first-SSH readiness:
make bench-create
Pass options through ARGS, for example:
make bench-create ARGS="--runs 3 --image docker-dev"
The benchmark prints JSON with:
- `create_ms`: wall time for `banger vm create`
- `ssh_ready_ms`: wall time from create start until `banger vm ssh <vm> -- true` succeeds
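A synthetic example of extracting one field from that JSON output (made-up values; field names as listed above):

```shell
# Stand-in for bench-create output; real runs produce real timings.
out='{"create_ms": 850, "ssh_ready_ms": 2300}'
echo "$out" | sed -n 's/.*"create_ms": \([0-9]*\).*/\1/p'
```

This prints `850`.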
Remaining Shell Helpers
The runtime VM lifecycle is managed through banger. The remaining shell scripts are not the primary user interface:
- `customize.sh`: manual reference flow for rootfs customization; `banger image build` is now Go-native, but the script still reads assets from `BANGER_RUNTIME_DIR` and stores transient state under `BANGER_STATE_DIR`/XDG state
- `make-rootfs.sh`: convenience wrapper for rebuilding `./runtime/rootfs-docker.ext4`
- `interactive.sh`: manual one-off rootfs customization over SSH
- `packages.sh`: shell helper library
- `verify.sh`: smoke test for the Go workflow (`./verify.sh --nat` adds NAT coverage)