# banger
banger manages Firecracker development VMs with a local daemon, managed image artifacts, and a localhost web UI.
## Requirements

- Linux with `/dev/kvm`
- `sudo`
- Firecracker installed on `PATH`, or `firecracker_bin` set in config
- The usual host tools checked by `./build/bin/banger doctor`
banger now owns complete managed image sets. A managed image includes:

- `rootfs`
- optional `work-seed`
- `kernel`
- optional `initrd`
- optional `modules`

There is no runtime bundle anymore.
## Build

    make build

This writes:

- `./build/bin/banger`
- `./build/bin/bangerd`
- `./build/bin/banger-vsock-agent`
## Install

    make install

That installs:

- `banger`
- `bangerd`
- the `banger-vsock-agent` companion helper under `../lib/banger/`
## Config

Config lives at `~/.config/banger/config.toml`.

Supported keys:

- `log_level`
- `web_listen_addr`
- `firecracker_bin`
- `ssh_key_path`
- `default_image_name`
- `auto_stop_stale_after`
- `stats_poll_interval`
- `metrics_poll_interval`
- `bridge_name`
- `bridge_ip`
- `cidr`
- `tap_pool_size`
- `default_dns`

If `ssh_key_path` is unset, banger creates and uses:

    ~/.config/banger/ssh/id_ed25519

`default_image_name` now only means "use this registered image when `vm create` omits `--image`". The daemon does not auto-register images from host paths.
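A minimal `config.toml` might look like the sketch below. The values shown are illustrative, not shipped defaults; the keys come from the supported list above.

```toml
# ~/.config/banger/config.toml — illustrative values, not defaults
log_level = "info"
web_listen_addr = "127.0.0.1:7777"

# Leave firecracker_bin unset to resolve `firecracker` from PATH:
# firecracker_bin = "/usr/local/bin/firecracker"

# Omit ssh_key_path to get the auto-managed default key:
# ssh_key_path = "~/.config/banger/ssh/id_ed25519"

# Used only when `vm create` omits --image:
default_image_name = "base"
```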
## Core Workflow

Check the host:

    ./build/bin/banger doctor

Register an existing host-side image stack:

    ./build/bin/banger image register \
      --name base \
      --rootfs /abs/path/rootfs.ext4 \
      --kernel /abs/path/vmlinux \
      --initrd /abs/path/initrd.img \
      --modules /abs/path/modules
Build a managed image from an existing registered image:

    ./build/bin/banger image build \
      --name devbox \
      --from-image base \
      --docker

Promote an unmanaged image into daemon-owned managed artifacts:

    ./build/bin/banger image promote base

Create and use a VM:

    ./build/bin/banger vm create --image devbox --name testbox
    ./build/bin/banger vm ssh testbox
    ./build/bin/banger vm stop testbox

`vm create` stays synchronous by default, but on a TTY it now shows live progress until the VM is fully ready.
## Web UI

`bangerd` serves a local web UI by default at:

    http://127.0.0.1:7777

See the effective URL with:

    ./build/bin/banger daemon status

Disable it with:

    web_listen_addr = ""
## Guest Services

Provisioned images include:

- `banger-vsock-agent` - guest networking bootstrap
- `mise`
- `opencode` - a default guest `opencode` service on `0.0.0.0:4096`

From the host:

    ./build/bin/banger vm ports testbox
    opencode attach http://<guest-ip>:4096
## Manual Helpers

The shell helpers are now explicit manual workflows under `./build/manual`.

Rebuild a Debian-style manual rootfs:

    make rootfs ARGS='--base-rootfs /abs/path/rootfs.ext4 --kernel /abs/path/vmlinux --initrd /abs/path/initrd.img --modules /abs/path/modules'

The output lands in:

- `./build/manual/rootfs-docker.ext4`
- `./build/manual/rootfs-docker.work-seed.ext4`
## Experimental Void Flow

Stage a Void kernel:

    make void-kernel

Build the experimental Void rootfs:

    make rootfs-void

Register it:

    make void-register

That flow uses:

- `./build/manual/void-kernel/`
- `./build/manual/rootfs-void.ext4`
- `./build/manual/rootfs-void.work-seed.ext4`
## Notes

- Firecracker is resolved from `PATH` by default.
- Managed image delete removes the daemon-owned artifact dir.
- The companion vsock helper is internal to the install/build layout, not a user-configured runtime path.
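The PATH-default resolution can be sketched as follows. This is an illustration only, not banger's actual code: `FIRECRACKER_BIN` here is a hypothetical stand-in for the `firecracker_bin` config key, not a real banger environment variable.

```shell
#!/bin/sh
# Sketch: mimic the resolution order described above.
# FIRECRACKER_BIN stands in for the firecracker_bin config key (hypothetical);
# when it is empty, fall back to the first `firecracker` found on PATH.
resolve_firecracker() {
  if [ -n "${FIRECRACKER_BIN:-}" ]; then
    printf '%s\n' "$FIRECRACKER_BIN"
  else
    command -v firecracker
  fi
}
```

An explicit `firecracker_bin` thus always wins over whatever happens to be first on `PATH`.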