# banger

Persistent Firecracker development VMs managed through a Go daemon, CLI, and TUI.

## Requirements

- Linux host with KVM (`/dev/kvm` access)
- Vsock support for post-SSH liveness reminders (`/dev/vhost-vsock`)
- Core VM lifecycle: `sudo`, `ip`, `dmsetup`, `losetup`, `blockdev`, `truncate`, `pgrep`, `chown`, `chmod`, `kill`
- Guest rootfs patching: `e2cp`, `e2rm`, `debugfs`
- Guest work disk creation/resizing: `mkfs.ext4`, `e2fsck`, `resize2fs`, `mount`, `umount`, `cp`
- SSH and logs: `ssh`
- Optional NAT: `iptables`, `sysctl`
- Image build: the bundled SSH key plus the tools above; `banger image build` no longer shells out through `customize.sh`

`banger` validates these per command and returns actionable errors instead of assuming one workstation layout.

## Runtime Bundle

Runtime artifacts are no longer tracked directly in Git. Source checkouts use a generated `./runtime/` bundle, while installed binaries use `$(prefix)/lib/banger`. The bundle contains:

- `firecracker`
- `banger-vsock-agent` for the guest-side vsock HTTP health agent and SSH reminder checks
- `bundle.json` with the bundle's default kernel/initrd/modules/rootfs paths
- a kernel, initrd, and modules tree referenced by `bundle.json`
- `rootfs-docker.ext4`
- `rootfs-docker.work-seed.ext4` when present, used to seed `/root` quickly on new VM creates
- `rootfs.ext4` when present
- `packages.apt`
- `id_ed25519`
- the helper scripts used by manual customization and installs

Bootstrap a source checkout from a local or published runtime archive. The checked-in [`runtime-bundle.toml`](/home/thales/projects/personal/banger/runtime-bundle.toml) is a template and intentionally ships with empty `url` and `sha256`.
If you need to create a local archive first, do that from a checkout or machine that already has a populated `./runtime/` tree:

```bash
make runtime-package
cp dist/banger-runtime.tar.gz /path/to/fresh-checkout/dist/
```

In the fresh checkout:

```bash
cp runtime-bundle.toml runtime-bundle.local.toml
```

Edit `runtime-bundle.local.toml` to point at the staged archive and checksum:

```toml
url = "./dist/banger-runtime.tar.gz"
sha256 = ""
```

Then bootstrap `./runtime/` with the local manifest copy:

```bash
make runtime-bundle RUNTIME_MANIFEST=runtime-bundle.local.toml
```

`url` may be a relative path, absolute path, `file:///...` URL, or HTTP(S) URL. `make install` will not fetch artifacts for you.

## Build

```bash
make build
```

Run `make build` after `./runtime/` has been bootstrapped. It also rebuilds the bundled `banger-vsock-agent` guest helper in `./runtime/`.

Install into `~/.local/bin` by default, with the runtime bundle under `~/.local/lib/banger`:

```bash
make install
```

After `make install`, the installed `banger` and `bangerd` do not need the repo checkout to keep working.

## Basic VM Workflow

Create and boot a VM:

```bash
banger vm create --name calm-otter --disk-size 16G
```

Check host/runtime readiness before creating VMs:

```bash
banger doctor
```

List VMs:

```bash
banger vm list
```

Inspect a VM:

```bash
banger vm show calm-otter
banger vm stats calm-otter
```

SSH into a running VM:

```bash
banger vm ssh calm-otter
```

When the SSH session exits normally, `banger` checks the guest over vsock and reminds you if the VM is still running.
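That vsock reminder path, like VM boot itself, depends on the device nodes listed under Requirements. A quick pre-flight sketch in plain shell (no `banger` needed; it only reports, the real validation is `banger doctor`):

```shell
# Check for the host device nodes banger's KVM boot and vsock
# reminder paths need (paths from the Requirements section).
status=""
for dev in /dev/kvm /dev/vhost-vsock; do
  if [ -e "$dev" ]; then
    status="$status ok:$dev"
  else
    status="$status missing:$dev"
  fi
done
echo "$status"
```

If either device is missing, load the corresponding kernel modules (`kvm_intel`/`kvm_amd` and `vhost_vsock`) before creating VMs.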
Inspect host-reachable listening ports for one or more running VMs:

```bash
banger vm ports calm-otter buildbox
```

Stop, start, restart, kill, or delete a VM:

```bash
banger vm stop calm-otter
banger vm start calm-otter
banger vm restart calm-otter
banger vm kill --signal TERM calm-otter
banger vm delete calm-otter
```

Update stopped VM settings:

```bash
banger vm set calm-otter --memory 2048 --vcpu 4 --disk-size 32G
```

Lifecycle and `set` actions also accept multiple VM refs and run them concurrently:

```bash
banger vm stop calm-otter buildbox api-1
banger vm kill --signal KILL aa12bb34 cc56dd78
banger vm set --nat web-1 web-2 web-3
```

Launch the TUI:

```bash
banger tui
```

## Daemon

The CLI auto-starts `bangerd` when needed. Useful daemon commands:

```bash
banger daemon status
banger daemon socket
banger daemon stop
```

`banger daemon status` prints the daemon PID, socket path, daemon log path, and the built-in DNS listener address.

State lives under XDG directories:

- config: `~/.config/banger`
- state: `~/.local/state/banger`
- cache: `~/.cache/banger`
- runtime socket: `$XDG_RUNTIME_DIR/banger/bangerd.sock`

Installed binaries resolve their runtime bundle from `../lib/banger` relative to the executable. Source-checkout binaries resolve it from `./runtime` next to the repo-built `./banger`. You can override either with `runtime_dir` in `~/.config/banger/config.toml` or `BANGER_RUNTIME_DIR`.

Useful config keys:

- `log_level`
- `runtime_dir`
- `tap_pool_size`
- `firecracker_bin`
- `namegen_path`
- `customize_script` (manual helper compatibility; `banger image build` is Go-native)
- `vsock_agent_path`
- `default_rootfs`
- `default_work_seed`
- `default_base_rootfs`
- `default_kernel`
- `default_initrd`
- `default_modules_dir`
- `default_packages_file`

Guest SSH access always uses the private key shipped in the resolved runtime bundle. `ssh_key_path` is no longer a supported override for `banger vm ssh`, VM start key injection, or daemon guest provisioning.
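A minimal `~/.config/banger/config.toml` using keys from the list above might look like this (all values are illustrative, not defaults):

```toml
# Illustrative overrides only; every key is optional.
log_level = "debug"
tap_pool_size = 4
runtime_dir = "/opt/banger-runtime"
```

Unset keys fall back to the resolved runtime bundle and built-in defaults.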
## Doctor

`banger doctor` runs the same readiness checks the Go control plane uses for VM start, host-integrated features, and image builds. It reports runtime bundle state, core VM host tools, current feature readiness, and image-build prerequisites in a concise pass/warn/fail list.

Use it when bringing up a new machine, after changing the runtime bundle, or before adding new host-integrated VM features.

## Logs

- daemon lifecycle logs: `~/.local/state/banger/bangerd.log`
- raw Firecracker output per VM: `~/.local/state/banger/vms/<name>/firecracker.log`
- raw image-build helper output: `~/.local/state/banger/image-build/*.log`

`bangerd.log` is structured JSON. Set `log_level` in `~/.config/banger/config.toml` or `BANGER_LOG_LEVEL` to one of `debug`, `info`, `warn`, or `error`.

## Images

List images:

```bash
banger image list
```

Build a managed image:

```bash
banger image build --name docker-dev --docker
```

Rebuilt images:

- install a pinned `mise` at `/usr/local/bin/mise` and activate it for bash login and interactive shells
- install `opencode` through `mise`
- configure `tmux-resurrect` plus `tmux-continuum` for `root`, with periodic autosaves and manual-only restore by default
- bake in the `banger-vsock-agent` systemd service used by the post-SSH reminder path and guest health checks
- emit a `work-seed.ext4` sidecar that lets new VMs clone a prepared `/root` work disk instead of rebuilding it from scratch on every create

Show or delete images:

```bash
banger image show docker-dev
banger image delete docker-dev
```

`banger` auto-registers the bundled `default_rootfs` image when it exists. If the bundle does not include a separate base `rootfs.ext4`, `image build` falls back to using `rootfs-docker.ext4` as its default base image.
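To see which rootfs artifacts a bootstrapped source checkout actually ships (and therefore which fallback applies), a plain-shell sketch run from the repo root:

```shell
# List rootfs artifacts in the runtime bundle; prints a note instead
# of erroring when the checkout has not been bootstrapped yet.
artifacts=$(ls -1 runtime/rootfs*.ext4 2>/dev/null || true)
echo "${artifacts:-no rootfs artifacts found}"
```

`banger doctor` reports the same bundle state with more context; this is just a quick manual check.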
## Networking And DNS

Enable NAT when creating or updating a VM:

```bash
banger vm create --name web --nat
banger vm set web --nat
banger vm set web --no-nat
```

NAT is applied by the Go control plane using host `iptables` rules derived from the VM's current guest IP and TAP device. The remaining shell helpers also route NAT changes through `banger` instead of a standalone shell NAT script.

`bangerd` also serves a tiny authoritative DNS service on `127.0.0.1:42069` for daemon-managed VMs. Known `A` records resolve `<name>.vm` to the VM's guest IPv4 address. Integrate your local resolver separately if you want transparent `.vm` lookups on the host.

`banger vm ports` asks the guest-side `banger-vsock-agent` to run `ss`, then prints host-usable `<name>.vm:port` endpoints plus the owning process/command. TCP listeners get a short best-effort HTTP probe; when the probe sees a real HTTP response, the command includes a clickable `http://<name>.vm:port/` URL. Older images without `ss` may need rebuilding before `vm ports` works.

## Storage Model

- VMs share a read-only base rootfs image.
- Each VM gets its own sparse writable system overlay for `/`.
- Each VM gets its own persistent ext4 work disk mounted at `/root`.
- When an image has a `work-seed.ext4` sidecar, new VM creates clone that seed and only resize it when needed. Older images still work, but create more slowly because `/root` must be built from scratch.
- The daemon can keep a small idle TAP pool warm in the background so VM create does not need to synchronously create a fresh TAP every time. `tap_pool_size` controls the pool depth.

## Architecture Notes

The Go daemon is the primary control plane. VM host integrations such as the built-in `.vm` DNS service, NAT, and `/root` work-disk wiring now sit behind a capability pipeline in the daemon instead of being open-coded through the VM lifecycle. Guest boot-time files and mounts are rendered through a structured guest-config builder rather than ad hoc `fstab` string mutation.
That split is intentional: future host-integrated features should plug into the daemon capability path and `banger doctor` checks first, with the remaining shell helpers treated as manual workflows rather than architecture drivers.

Stopping a VM preserves its overlay and work disk.

## Rebuilding The Repo Default Rootfs

`packages.apt` controls the base apt packages baked into rebuilt images, including guest tools such as `ss` used by `banger vm ports`.

To rebuild the source-checkout default image in `./runtime/rootfs-docker.ext4`:

```bash
make rootfs
```

`make rootfs` expects a bootstrapped runtime bundle. The rebuild also regenerates `./runtime/rootfs-docker.work-seed.ext4`, which the daemon uses to speed up future `vm create` calls.

If your runtime bundle does not include `./runtime/rootfs.ext4`, pass an explicit base image instead:

```bash
./make-rootfs.sh --base-rootfs /path/to/base-rootfs.ext4
```

If the package manifest changed and you want a fresh source-checkout image:

```bash
rm -f ./runtime/rootfs-docker.ext4 ./runtime/rootfs-docker.ext4.packages.sha256
make rootfs
```

Existing VMs keep using their current image and disks; rebuilds only affect VMs created from the rebuilt image afterward.

## Experimental Void Rootfs

There is also a separate, opt-in builder for an experimental Void Linux guest path:

```bash
make rootfs-void
```

That writes:

- `./runtime/rootfs-void.ext4`
- `./runtime/rootfs-void.work-seed.ext4`

This path is intentionally local-only and does not change the default Debian image flow.
It reuses the current runtime bundle kernel, initrd, and modules, but builds a lean `x86_64-glibc` Void userspace with:

- `bash` installed for interactive/admin use
- `docker` plus `docker-compose` installed from Void packages
- the `docker` runit service enabled, with Docker netfilter/forwarding kernel prep
- `openssh` enabled under runit
- the bundled `banger-vsock-agent` health agent enabled under runit
- `root` normalized to `/bin/bash` while keeping `/bin/sh` as the distro's system shell
- a generated `/root` work-seed for fast creates

It does not install the Debian-oriented extras from rebuilt default images:

- no `mise`
- no `opencode`
- no tmux plugin defaults

The builder fetches official static XBPS tools and packages from the Void mirror during the build. It currently supports only `x86_64-glibc`. The package set comes from [`packages.void`](/home/thales/projects/personal/banger/packages.void).

You can override the mirror, size, or output path directly:

```bash
./make-rootfs-void.sh --mirror https://repo-default.voidlinux.org --size 2G
```

The fastest local iteration loop does not require changing your default image config at all:

```bash
make rootfs-void
make void-register
./banger vm create --image void-exp --name void-dev
./banger vm ssh void-dev
```

Rebuild the Void rootfs and recreate existing `void-exp` VMs after changing the package set or guest provisioning; a restart alone will not update the image contents or the `/root` work-seed.

There is also a smoke path for the experimental image:

```bash
make verify-void
```

`make void-register` uses the unmanaged image registration path to create or update a `void-exp` image record in place, so repeated rebuilds do not require editing `~/.config/banger/config.toml`.
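Before rebuilding after a package-set edit, it can help to diff the old and new manifests. A sketch with stand-in content (in practice point `old`/`new` at your previous and edited `packages.void`; `comm` requires sorted input):

```shell
# Diff two sorted package manifests line-by-line with comm(1).
old=$(mktemp); new=$(mktemp)
printf 'bash\ndocker\nopenssh\n' > "$old"
printf 'bash\ncurl\ndocker\n' > "$new"
added=$(comm -13 "$old" "$new")    # lines only in the new manifest
removed=$(comm -23 "$old" "$new")  # lines only in the old manifest
echo "added: $added"
echo "removed: $removed"
rm -f "$old" "$new"
```

With the stand-in data this reports `curl` as added and `openssh` as removed.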
There is also a one-step helper target:

```bash
make void-vm VOID_VM_NAME=void-a
```

If you really want the Void image to become your default for `vm create` without `--image`, use the checked-in override template at [`examples/void-exp.config.toml`](/home/thales/projects/personal/banger/examples/void-exp.config.toml) and merge its four settings into `~/.config/banger/config.toml`.

`banger image build` remains Debian-only in this pass. Do not point `default_base_rootfs` at the Void artifact yet.

## Registering Unmanaged Images

You can also register any local rootfs as an unmanaged image record without changing global defaults:

```bash
banger image register --name local-test --rootfs /abs/path/rootfs.ext4
```

Optional paths let you point at an existing work seed, kernel, initrd, modules, and package manifest:

```bash
banger image register \
  --name void-exp \
  --rootfs ./runtime/rootfs-void.ext4 \
  --work-seed ./runtime/rootfs-void.work-seed.ext4 \
  --packages ./packages.void
```

If an unmanaged image with the same name already exists, `image register` updates it in place so future `vm create --image <name>` calls pick up the new artifacts immediately.

## Maintaining The Runtime Bundle

The checked-in [`runtime-bundle.toml`](/home/thales/projects/personal/banger/runtime-bundle.toml) is a template. Keep `bundle_metadata` accurate there, but use a separate local manifest copy when you need concrete `url` and `sha256` values for bootstrap testing or publication.

Package a local `./runtime/` tree into an archive:

```bash
make runtime-package
```

That writes `dist/banger-runtime.tar.gz` and prints its SHA256 so you can update a local manifest copy before testing bootstrap changes or publishing the archive elsewhere.
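The same inspection can be done by hand with `tar` and `sha256sum`. A sketch using a stand-in tree (in practice run the last two commands against `dist/banger-runtime.tar.gz`):

```shell
# Build a stand-in archive, list its contents, then compute the SHA256
# digest you would paste into a local manifest copy.
tmpdir=$(mktemp -d)
touch "$tmpdir/firecracker" "$tmpdir/bundle.json"
tar -czf "$tmpdir/bundle.tar.gz" -C "$tmpdir" firecracker bundle.json
tar -tzf "$tmpdir/bundle.tar.gz"
sum=$(sha256sum "$tmpdir/bundle.tar.gz" | awk '{print $1}')
echo "sha256 = \"$sum\""
rm -rf "$tmpdir"
```

Listing the archive before publishing is a cheap way to catch a bundle packaged from an incomplete `./runtime/` tree.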
## Benchmarking Create Time

Benchmark the current host's `vm create` wall time plus first-SSH readiness:

```bash
make bench-create
```

Pass options through `ARGS`, for example:

```bash
make bench-create ARGS="--runs 3 --image docker-dev"
```

The benchmark prints JSON with:

- `create_ms`: wall time for `banger vm create`
- `ssh_ready_ms`: wall time from create start until `banger vm ssh -- true` succeeds

## Remaining Shell Helpers

The runtime VM lifecycle is managed through `banger`. The remaining shell scripts are not the primary user interface:

- `customize.sh`: manual reference flow for rootfs customization; `banger image build` is now Go-native, but the script still reads assets from `BANGER_RUNTIME_DIR` and stores transient state under `BANGER_STATE_DIR`/XDG state
- `make-rootfs.sh`: convenience wrapper for rebuilding `./runtime/rootfs-docker.ext4`
- `interactive.sh`: manual one-off rootfs customization over SSH
- `packages.sh`: shell helper library
- `verify.sh`: smoke test for the Go workflow (`./verify.sh --nat` adds NAT coverage)