One-command development sandboxes on Firecracker microVMs. https://git.thaloco.com/thaloco/banger/

banger

Persistent Firecracker development VMs managed through a Go daemon, CLI, and TUI.

Requirements

  • Linux host with KVM (/dev/kvm access)
  • Core VM lifecycle: sudo, ip, dmsetup, losetup, blockdev, truncate, pgrep, chown, chmod, kill
  • Guest rootfs patching: e2cp, e2rm, debugfs
  • Guest work disk creation/resizing: mkfs.ext4, e2fsck, resize2fs, mount, umount, cp
  • SSH and logs: ssh
  • Optional NAT: iptables, sysctl
  • Image build: the bundled SSH key plus the tools above; banger image build no longer shells out through customize.sh

banger validates these per command and returns actionable errors instead of assuming one workstation layout.
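You can approximate that check by hand with a `command -v` loop over the tool list above. This is only an illustration of the pattern, not banger's actual validation code; the second tool name is a deliberately absent placeholder so both branches are visible:

```shell
# Probe for each tool and report whether it is on PATH.
# "this-tool-does-not-exist" is a deliberately fake name.
for tool in sh this-tool-does-not-exist; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found: $tool"
  else
    echo "missing: $tool"
  fi
done
```

Substitute the real per-command lists from the bullets above when checking a new host.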

Runtime Bundle

Runtime artifacts are no longer tracked directly in Git. Source checkouts use a generated ./runtime/ bundle, while installed binaries use $(prefix)/lib/banger.

The bundle contains:

  • firecracker
  • bundle.json with the bundle's default kernel/initrd/modules/rootfs paths
  • a kernel, initrd, and modules tree referenced by bundle.json
  • rootfs-docker.ext4
  • rootfs.ext4 when present
  • packages.apt
  • id_ed25519
  • the helper scripts used by manual customization and installs
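For illustration, a bundle.json for such a tree might look like the following. The exact field names are not specified in this README and are shown here purely as an assumption; only the artifact filenames come from the list above:

```json
{
  "kernel": "vmlinux",
  "initrd": "initrd.img",
  "modules": "modules/",
  "rootfs": "rootfs-docker.ext4"
}
```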

Bootstrap a source checkout from a local or published runtime archive. The checked-in runtime-bundle.toml is a template and intentionally ships with empty url and sha256.

If you need to create a local archive first, do that from a checkout or machine that already has a populated ./runtime/ tree:

make runtime-package
cp dist/banger-runtime.tar.gz /path/to/fresh-checkout/dist/

In the fresh checkout:

cp runtime-bundle.toml runtime-bundle.local.toml

Edit runtime-bundle.local.toml to point at the staged archive and checksum:

url = "./dist/banger-runtime.tar.gz"
sha256 = "<sha256 printed by make runtime-package>"

Then bootstrap ./runtime/ with the local manifest copy:

make runtime-bundle RUNTIME_MANIFEST=runtime-bundle.local.toml

url may be a relative path, absolute path, file:///... URL, or HTTP(S) URL. make install will not fetch artifacts for you.

Build

make build

Run make build after ./runtime/ has been bootstrapped.

Install into ~/.local/bin by default, with the runtime bundle under ~/.local/lib/banger:

make install

After make install, the installed banger and bangerd do not need the repo checkout to keep working.

Basic VM Workflow

Create and boot a VM:

banger vm create --name calm-otter --disk-size 16G

List VMs:

banger vm list

Inspect a VM:

banger vm show calm-otter
banger vm stats calm-otter

SSH into a running VM:

banger vm ssh calm-otter

Stop, restart, kill, or delete it:

banger vm stop calm-otter
banger vm start calm-otter
banger vm restart calm-otter
banger vm kill --signal TERM calm-otter
banger vm delete calm-otter

Update stopped VM settings:

banger vm set calm-otter --memory 2048 --vcpu 4 --disk-size 32G

Lifecycle and set actions also accept multiple VM refs and run them concurrently:

banger vm stop calm-otter buildbox api-1
banger vm kill --signal KILL aa12bb34 cc56dd78
banger vm set --nat web-1 web-2 web-3

Launch the TUI:

banger tui

Daemon

The CLI auto-starts bangerd when needed.

Useful daemon commands:

banger daemon status
banger daemon socket
banger daemon stop

banger daemon status prints the daemon PID, socket path, daemon log path, and the built-in DNS listener address.

State lives under XDG directories:

  • config: ~/.config/banger
  • state: ~/.local/state/banger
  • cache: ~/.cache/banger
  • runtime socket: $XDG_RUNTIME_DIR/banger/bangerd.sock

Installed binaries resolve their runtime bundle from ../lib/banger relative to the executable. Source-checkout binaries resolve it from ./runtime next to the repo-built ./banger. You can override either with runtime_dir in ~/.config/banger/config.toml or BANGER_RUNTIME_DIR.
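That precedence can be sketched as a shell parameter-expansion chain. This is illustrative only: `config_runtime_dir` stands in for the runtime_dir key from config.toml, and the paths are example values:

```shell
# Resolution order: BANGER_RUNTIME_DIR, then runtime_dir from config.toml,
# then ../lib/banger relative to the installed executable.
exe=/home/user/.local/bin/banger
config_runtime_dir=""             # empty: key not set in config.toml
unset BANGER_RUNTIME_DIR
runtime_dir="${BANGER_RUNTIME_DIR:-${config_runtime_dir:-$(dirname "$exe")/../lib/banger}}"
echo "$runtime_dir"
```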

Useful config keys:

  • log_level
  • runtime_dir
  • firecracker_bin
  • ssh_key_path
  • namegen_path
  • customize_script (manual helper compatibility; banger image build is Go-native)
  • default_rootfs
  • default_base_rootfs
  • default_kernel
  • default_initrd
  • default_modules_dir
  • default_packages_file
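A minimal ~/.config/banger/config.toml using a few of these keys might look like this; the values are examples, not defaults:

```toml
log_level = "info"
runtime_dir = "/home/user/.local/lib/banger"
ssh_key_path = "/home/user/.local/lib/banger/id_ed25519"
```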

Logs

  • daemon lifecycle logs: ~/.local/state/banger/bangerd.log
  • raw Firecracker output per VM: ~/.local/state/banger/vms/<vm-id>/firecracker.log
  • raw image-build helper output: ~/.local/state/banger/image-build/*.log

bangerd.log is structured JSON. Set log_level in ~/.config/banger/config.toml or BANGER_LOG_LEVEL to one of debug, info, warn, or error.

Images

List images:

banger image list

Build a managed image:

banger image build --name docker-dev --docker

Rebuilt images install a pinned mise at /usr/local/bin/mise, activate it for bash login and interactive shells, install opencode through mise, and configure tmux-resurrect plus tmux-continuum for root with periodic autosaves and manual-only restore by default.
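The managed /root/.tmux.conf block presumably resembles the following. The marker comments and the autosave interval are illustrative assumptions; the plugin names and the manual-only restore default match the behavior described above:

```tmux
# BEGIN banger-managed block (illustrative markers)
set -g @plugin 'tmux-plugins/tpm'
set -g @plugin 'tmux-plugins/tmux-resurrect'
set -g @plugin 'tmux-plugins/tmux-continuum'
set -g @continuum-save-interval '15'   # periodic autosave, in minutes
set -g @continuum-restore 'off'        # restore stays manual (prefix + Ctrl-r)
run '~/.tmux/plugins/tpm/tpm'
# END banger-managed block
```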

Show or delete images:

banger image show docker-dev
banger image delete docker-dev

banger auto-registers the bundled default_rootfs image when it exists. If the bundle does not include a separate base rootfs.ext4, image build falls back to using rootfs-docker.ext4 as its default base image.

Networking And DNS

Enable NAT when creating or updating a VM:

banger vm create --name web --nat
banger vm set web --nat
banger vm set web --no-nat

NAT is applied by the Go control plane using host iptables rules derived from the VM's current guest IP and TAP device. The remaining shell helpers also route NAT changes through banger instead of a standalone shell NAT script.
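For reference, host rules equivalent to what the control plane applies might look like the following. The TAP name and guest IP are placeholders, not values banger emits, and the commands are printed rather than executed so they can be reviewed before being run with root privileges:

```shell
# Example NAT rules for one VM; TAP and GUEST_IP are placeholders.
TAP=tap0
GUEST_IP=172.16.0.2
cat <<EOF
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s ${GUEST_IP}/32 ! -o ${TAP} -j MASQUERADE
iptables -A FORWARD -i ${TAP} -j ACCEPT
iptables -A FORWARD -o ${TAP} -m state --state RELATED,ESTABLISHED -j ACCEPT
EOF
```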

bangerd also serves a tiny authoritative DNS service on 127.0.0.1:42069 for daemon-managed VMs. Known A records resolve <vm-name>.vm to the VM's guest IPv4 address. Integrate your local resolver separately if you want transparent .vm lookups on the host.
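As one possible resolver integration, dnsmasq can forward the .vm zone to the daemon's listener; banger does not configure this for you, and the file path here is just a conventional choice:

```
# /etc/dnsmasq.d/banger-vm.conf
server=/vm/127.0.0.1#42069
```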

Storage Model

  • VMs share a read-only base rootfs image.
  • Each VM gets its own sparse writable system overlay for /.
  • Each VM gets its own persistent ext4 work disk mounted at /root.
  • Stopping a VM preserves its overlay and work disk.

Rebuilding The Repo Default Rootfs

packages.apt controls the base apt packages baked into rebuilt images.

To rebuild the source-checkout default image in ./runtime/rootfs-docker.ext4:

make rootfs

If your runtime bundle does not include ./runtime/rootfs.ext4, pass an explicit base image instead:

./make-rootfs.sh --base-rootfs /path/to/base-rootfs.ext4

If the package manifest changed and you want a fresh source-checkout image:

rm -f ./runtime/rootfs-docker.ext4 ./runtime/rootfs-docker.ext4.packages.sha256
make rootfs

Existing VMs keep using their current image and disks; rebuilds only affect VMs created from the rebuilt image afterward.

Maintaining The Runtime Bundle

The checked-in runtime-bundle.toml is a template. Keep bundle_metadata accurate there, but use a separate local manifest copy when you need concrete url and sha256 values for bootstrap testing or publication.

Package a local ./runtime/ tree into an archive:

make runtime-package

That writes dist/banger-runtime.tar.gz and prints its SHA256 so you can update a local manifest copy before testing bootstrap changes or publishing the archive elsewhere.

Remaining Shell Helpers

The runtime VM lifecycle is managed through banger. The remaining shell scripts are not the primary user interface:

  • customize.sh: manual reference flow for rootfs customization; banger image build is now Go-native, but the script still reads assets from BANGER_RUNTIME_DIR and stores transient state under BANGER_STATE_DIR/XDG state
  • make-rootfs.sh: convenience wrapper for rebuilding ./runtime/rootfs-docker.ext4
  • interactive.sh: manual one-off rootfs customization over SSH
  • packages.sh: shell helper library
  • verify.sh: smoke test for the Go workflow (./verify.sh --nat adds NAT coverage)