# banger

Persistent Firecracker development VMs managed through a Go daemon, CLI, and TUI.

## Requirements

- Linux host with KVM (`/dev/kvm` access)
- Vsock support for post-SSH liveness reminders (`/dev/vhost-vsock`)
- Core VM lifecycle: `sudo`, `ip`, `dmsetup`, `losetup`, `blockdev`, `truncate`, `pgrep`, `chown`, `chmod`, `kill`
- Guest rootfs patching: `e2cp`, `e2rm`, `debugfs`
- Guest work disk creation/resizing: `mkfs.ext4`, `e2fsck`, `resize2fs`, `mount`, `umount`, `cp`
- SSH and logs: `ssh`
- Optional NAT: `iptables`, `sysctl`
- Image build: the bundled SSH key plus the tools above; `banger image build` no longer shells out through `customize.sh`

`banger` validates these per command and returns actionable errors instead of assuming one workstation layout.

## Runtime Bundle

Runtime artifacts are no longer tracked directly in Git. Source checkouts use a generated `./runtime/` bundle, while installed binaries use `$(prefix)/lib/banger`. The bundle contains:

- `firecracker`
- `banger-vsock-pingd`, the guest-side SSH reminder responder
- `bundle.json` with the bundle's default kernel/initrd/modules/rootfs paths
- a kernel, initrd, and modules tree referenced by `bundle.json`
- `rootfs-docker.ext4`
- `rootfs-docker.work-seed.ext4` when present, used to seed `/root` quickly on new VM creates
- `rootfs.ext4` when present
- `packages.apt`
- `id_ed25519`
- the helper scripts used by manual customization and installs

Bootstrap a source checkout from a local or published runtime archive. The checked-in [`runtime-bundle.toml`](runtime-bundle.toml) is a template and intentionally ships with empty `url` and `sha256`.
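Whatever archive a manifest copy ends up pointing at, you can sanity-check it against the recorded `sha256` by hand before bootstrapping; a minimal sketch (the helper name is illustrative, not part of `banger`):

```shell
# Illustrative helper, not part of banger: check a staged archive against
# the sha256 a local manifest copy records before bootstrapping from it.
verify_runtime_archive() {
  local archive=$1 want=$2 got
  got=$(sha256sum "$archive" | awk '{print $1}')
  [ "$got" = "$want" ]
}
```

`make runtime-package` prints the matching SHA256 when it writes the archive.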
If you need to create a local archive first, do that from a checkout or machine that already has a populated `./runtime/` tree:

```bash
make runtime-package
cp dist/banger-runtime.tar.gz /path/to/fresh-checkout/dist/
```

In the fresh checkout:

```bash
cp runtime-bundle.toml runtime-bundle.local.toml
```

Edit `runtime-bundle.local.toml` to point at the staged archive and checksum:

```toml
url = "./dist/banger-runtime.tar.gz"
sha256 = ""
```

Then bootstrap `./runtime/` with the local manifest copy:

```bash
make runtime-bundle RUNTIME_MANIFEST=runtime-bundle.local.toml
```

`url` may be a relative path, an absolute path, a `file:///...` URL, or an HTTP(S) URL. `make install` will not fetch artifacts for you.

## Build

```bash
make build
```

Run `make build` after `./runtime/` has been bootstrapped. It also rebuilds the bundled `banger-vsock-pingd` guest helper in `./runtime/`.

Install into `~/.local/bin` by default, with the runtime bundle under `~/.local/lib/banger`:

```bash
make install
```

After `make install`, the installed `banger` and `bangerd` do not need the repo checkout to keep working.

## Basic VM Workflow

Create and boot a VM:

```bash
banger vm create --name calm-otter --disk-size 16G
```

Check host/runtime readiness before creating VMs:

```bash
banger doctor
```

List VMs:

```bash
banger vm list
```

Inspect a VM:

```bash
banger vm show calm-otter
banger vm stats calm-otter
```

SSH into a running VM:

```bash
banger vm ssh calm-otter
```

When the SSH session exits normally, `banger` checks the guest over vsock and reminds you if the VM is still running.
Stop, start, restart, kill, or delete it:

```bash
banger vm stop calm-otter
banger vm start calm-otter
banger vm restart calm-otter
banger vm kill --signal TERM calm-otter
banger vm delete calm-otter
```

Update stopped VM settings:

```bash
banger vm set calm-otter --memory 2048 --vcpu 4 --disk-size 32G
```

Lifecycle and `set` actions also accept multiple VM refs and run them concurrently:

```bash
banger vm stop calm-otter buildbox api-1
banger vm kill --signal KILL aa12bb34 cc56dd78
banger vm set --nat web-1 web-2 web-3
```

Launch the TUI:

```bash
banger tui
```

## Daemon

The CLI auto-starts `bangerd` when needed. Useful daemon commands:

```bash
banger daemon status
banger daemon socket
banger daemon stop
```

`banger daemon status` prints the daemon PID, socket path, daemon log path, and the built-in DNS listener address.

State lives under XDG directories:

- config: `~/.config/banger`
- state: `~/.local/state/banger`
- cache: `~/.cache/banger`
- runtime socket: `$XDG_RUNTIME_DIR/banger/bangerd.sock`

Installed binaries resolve their runtime bundle from `../lib/banger` relative to the executable. Source-checkout binaries resolve it from `./runtime` next to the repo-built `./banger`. You can override either with `runtime_dir` in `~/.config/banger/config.toml` or `BANGER_RUNTIME_DIR`.

Useful config keys:

- `log_level`
- `runtime_dir`
- `tap_pool_size`
- `firecracker_bin`
- `ssh_key_path`
- `namegen_path`
- `customize_script` (manual helper compatibility; `banger image build` is Go-native)
- `vsock_ping_helper_path`
- `default_rootfs`
- `default_work_seed`
- `default_base_rootfs`
- `default_kernel`
- `default_initrd`
- `default_modules_dir`
- `default_packages_file`

## Doctor

`banger doctor` runs the same readiness checks the Go control plane uses for VM start, host-integrated features, and image builds. It reports runtime bundle state, core VM host tools, current feature readiness, and image-build prerequisites in a concise pass/warn/fail list.
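A rough manual approximation of the host-tool portion of those checks, for the list in Requirements (the function is illustrative; `banger doctor` remains the supported path and has far better error reporting):

```shell
# Illustrative only: report which of the listed host tools are missing.
# `banger doctor` does the real per-command validation.
check_tools() {
  local t rc=0
  for t in "$@"; do
    command -v "$t" >/dev/null 2>&1 || { echo "missing: $t"; rc=1; }
  done
  return $rc
}
```

For example, `check_tools ip dmsetup losetup blockdev truncate ssh` prints one `missing:` line per absent tool and exits nonzero if any are absent.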
Use it when bringing up a new machine, after changing the runtime bundle, or before adding new host-integrated VM features.

## Logs

- daemon lifecycle logs: `~/.local/state/banger/bangerd.log`
- raw Firecracker output per VM: `~/.local/state/banger/vms/<vm-id>/firecracker.log`
- raw image-build helper output: `~/.local/state/banger/image-build/*.log`

`bangerd.log` is structured JSON. Set `log_level` in `~/.config/banger/config.toml` or `BANGER_LOG_LEVEL` to one of `debug`, `info`, `warn`, or `error`.

## Images

List images:

```bash
banger image list
```

Build a managed image:

```bash
banger image build --name docker-dev --docker
```

Rebuilt images install a pinned `mise` at `/usr/local/bin/mise`, activate it for bash login and interactive shells, install `opencode` through `mise`, configure `tmux-resurrect` plus `tmux-continuum` for `root` (periodic autosaves, manual-only restore by default), and bake in the `banger-vsock-pingd` systemd service used by the post-SSH reminder path. They also emit a `work-seed.ext4` sidecar that lets new VMs clone a prepared `/root` work disk instead of rebuilding it from scratch on every create.

Show or delete images:

```bash
banger image show docker-dev
banger image delete docker-dev
```

`banger` auto-registers the bundled `default_rootfs` image when it exists. If the bundle does not include a separate base `rootfs.ext4`, `image build` falls back to `rootfs-docker.ext4` as its default base image.

## Networking And DNS

Enable NAT when creating or updating a VM:

```bash
banger vm create --name web --nat
banger vm set web --nat
banger vm set web --no-nat
```

NAT is applied by the Go control plane using host `iptables` rules derived from the VM's current guest IP and TAP device. The remaining shell helpers also route NAT changes through `banger` instead of a standalone shell NAT script.

`bangerd` also serves a tiny authoritative DNS service on `127.0.0.1:42069` for daemon-managed VMs.
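If you want your host to consult that listener transparently, one common approach is to forward the zone from your local resolver; a sketch assuming dnsmasq (`banger` does not write resolver configuration for you, and the file path is illustrative):

```conf
# /etc/dnsmasq.d/banger-vm.conf (path illustrative): forward *.vm queries
# to bangerd's built-in DNS listener.
server=/vm/127.0.0.1#42069
```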
Known `A` records resolve `<name>.vm` to the VM's guest IPv4 address. Integrate your local resolver separately if you want transparent `.vm` lookups on the host.

## Storage Model

- VMs share a read-only base rootfs image.
- Each VM gets its own sparse writable system overlay for `/`.
- Each VM gets its own persistent ext4 work disk mounted at `/root`.
- When an image has a `work-seed.ext4` sidecar, new VM creates clone that seed and resize it only when needed. Older images still work, but create more slowly because `/root` must be built from scratch.
- The daemon can keep a small idle TAP pool warm in the background so VM create does not need to synchronously create a fresh TAP device every time. `tap_pool_size` controls the pool depth.
- Stopping a VM preserves its overlay and work disk.

## Architecture Notes

The Go daemon is the primary control plane. VM host integrations such as the built-in `.vm` DNS service, NAT, and `/root` work-disk wiring now sit behind a capability pipeline in the daemon instead of being open-coded through the VM lifecycle. Guest boot-time files and mounts are rendered through a structured guest-config builder rather than ad hoc `fstab` string mutation.

That split is intentional: future host-integrated features should plug into the daemon capability path and `banger doctor` checks first, with the remaining shell helpers treated as manual workflows rather than architecture drivers.

## Rebuilding The Repo Default Rootfs

`packages.apt` controls the base apt packages baked into rebuilt images. To rebuild the source-checkout default image in `./runtime/rootfs-docker.ext4`:

```bash
make rootfs
```

That rebuild also regenerates `./runtime/rootfs-docker.work-seed.ext4`, which the daemon uses to speed up future `vm create` calls.
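The `./runtime/rootfs-docker.ext4.packages.sha256` marker file suggests the rebuild is keyed off a checksum of `packages.apt`; a sketch of that freshness idea, with the logic assumed rather than lifted from `make-rootfs.sh`:

```shell
# Assumed logic, illustrative only: a rebuild is needed when no marker
# exists yet, or the stored checksum no longer matches the manifest.
needs_rootfs_rebuild() {
  local manifest=$1 marker=$2
  [ -f "$marker" ] || return 0
  [ "$(sha256sum "$manifest" | awk '{print $1}')" != "$(cat "$marker")" ]
}
```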
If your runtime bundle does not include `./runtime/rootfs.ext4`, pass an explicit base image instead:

```bash
./make-rootfs.sh --base-rootfs /path/to/base-rootfs.ext4
```

If the package manifest changed and you want a fresh source-checkout image:

```bash
rm -f ./runtime/rootfs-docker.ext4 ./runtime/rootfs-docker.ext4.packages.sha256
make rootfs
```

`make rootfs` expects a bootstrapped runtime bundle. Existing VMs keep using their current image and disks; rebuilds only affect VMs created from the rebuilt image afterward.

## Maintaining The Runtime Bundle

The checked-in [`runtime-bundle.toml`](runtime-bundle.toml) is a template. Keep `bundle_metadata` accurate there, but use a separate local manifest copy when you need concrete `url` and `sha256` values for bootstrap testing or publication.

Package a local `./runtime/` tree into an archive:

```bash
make runtime-package
```

That writes `dist/banger-runtime.tar.gz` and prints its SHA256 so you can update a local manifest copy before testing bootstrap changes or publishing the archive elsewhere.

## Benchmarking Create Time

Benchmark the current host's `vm create` wall time plus first-SSH readiness:

```bash
make bench-create
```

Pass options through `ARGS`, for example:

```bash
make bench-create ARGS="--runs 3 --image docker-dev"
```

The benchmark prints JSON with:

- `create_ms`: wall time for `banger vm create`
- `ssh_ready_ms`: wall time from create start until `banger vm ssh -- true` succeeds

## Remaining Shell Helpers

The runtime VM lifecycle is managed through `banger`.
The remaining shell scripts are not the primary user interface:

- `customize.sh`: manual reference flow for rootfs customization; `banger image build` is now Go-native, but the script still reads assets from `BANGER_RUNTIME_DIR` and stores transient state under `BANGER_STATE_DIR`/XDG state
- `make-rootfs.sh`: convenience wrapper for rebuilding `./runtime/rootfs-docker.ext4`
- `interactive.sh`: manual one-off rootfs customization over SSH
- `packages.sh`: shell helper library
- `verify.sh`: smoke test for the Go workflow (`./verify.sh --nat` adds NAT coverage)