Remove runtime-bundle image dependencies

Hard-cut banger away from source-checkout runtime bundles as an implicit source of
image and host defaults. Managed images now own their full boot set,
image build starts from an existing registered image, and daemon startup
no longer synthesizes a default image from host paths.

Resolve Firecracker from PATH or firecracker_bin, make SSH keys config-owned
with an auto-managed XDG default, replace the external name generator and
package manifests with Go code, and keep the vsock helper as a companion
binary instead of a user-managed runtime asset.

Update the manual scripts, web/CLI forms, config surface, and docs around
the new build/manual flow and explicit image registration semantics.

Validation: GOCACHE=/tmp/banger-gocache go test ./..., bash -n scripts/*.sh,
and make build.
Thales Maciel 2026-03-21 18:34:53 -03:00
parent 01c7cb5e65
commit 572bf32424
No known key found for this signature in database
GPG key ID: 33112E6833C34679
44 changed files with 1194 additions and 3456 deletions

README.md (558 changed lines)

@@ -1,520 +1,196 @@
# banger
`banger` manages Firecracker development VMs with a local daemon, managed image artifacts, and a localhost web UI.
## Requirements
- Linux with `/dev/kvm`
- `sudo`
- Firecracker installed on `PATH`, or `firecracker_bin` set in config
- The usual host tools checked by `./build/bin/banger doctor`

`banger` validates these per command and returns actionable errors instead of
assuming one workstation layout.
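`banger doctor` is the authoritative check, but the same idea is easy to sketch as a pre-flight script; this is illustrative only (the tool list is abbreviated, and `check_tool` is a hypothetical helper, not part of banger):

```shell
#!/bin/sh
# Illustrative pre-flight check: report whether each required host tool
# is resolvable on PATH. `banger doctor` remains the real check.
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "ok"
  else
    echo "missing"
  fi
}

for tool in sudo ip firecracker ssh; do
  printf '%s: %s\n' "$tool" "$(check_tool "$tool")"
done
```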
## Managed Images
There is no runtime bundle anymore. `banger` now owns complete managed image
sets. A managed image includes:
- `rootfs`
- optional `work-seed`
- `kernel`
- optional `initrd`
- optional `modules`
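Before pointing `banger image register` at an image set, it can help to verify every artifact path exists up front; a minimal sketch (the paths and the `require_file` helper are illustrative, not banger functionality):

```shell
#!/bin/sh
# Illustrative check: fail fast when a referenced image artifact is missing.
require_file() {
  if [ -f "$1" ]; then
    echo "found: $1"
  else
    echo "MISSING: $1" >&2
    return 1
  fi
}

# Demo files so the sketch runs standalone; substitute real artifact paths.
rootfs=$(mktemp /tmp/demo-rootfs.XXXXXX)
kernel=$(mktemp /tmp/demo-vmlinux.XXXXXX)

require_file "$rootfs"
require_file "$kernel"
```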
## Build
```bash
make build
```
This writes:
- `./build/bin/banger`
- `./build/bin/bangerd`
- `./build/bin/banger-vsock-agent`

Older ignored root artifacts such as `./runtime/`, `./banger`, and `./bangerd`
are no longer the canonical source-checkout layout. Leave them alone if you
still need them, or remove them manually after migrating to `build/`.

If you have confirmed your current images and runtime settings no longer point
at the old checkout-local paths, a one-time cleanup looks like:
```bash
rm -rf ./runtime ./banger ./bangerd
```
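A cautious way to confirm nothing still references the old layout before deleting it is to grep your config; a sketch (the `stale_refs` helper is hypothetical, the config path is the documented default):

```shell
#!/bin/sh
# Illustrative scan: print config lines that still mention ./runtime.
# Only remove the old artifacts once this prints nothing.
stale_refs() {
  [ -f "$1" ] || return 0          # silent when the file does not exist
  grep -n '\./runtime' "$1" || true
}

stale_refs "$HOME/.config/banger/config.toml"
```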
## Install
Install into `~/.local/bin` by default:
```bash
make install
```
That installs:
- `banger`
- `bangerd`
- the `banger-vsock-agent` companion helper under `../lib/banger/`

After `make install`, the installed `banger` and `bangerd` do not need the repo
checkout to keep working.
## Config
Config lives at `~/.config/banger/config.toml`.

Supported keys:
- `log_level`
- `web_listen_addr` (`""` disables the web UI)
- `tap_pool_size`
- `firecracker_bin`
- `ssh_key_path`
- `default_image_name`
- `auto_stop_stale_after`
- `stats_poll_interval`
- `metrics_poll_interval`
- `bridge_name`
- `bridge_ip`
- `cidr`
- `default_dns`

State lives under XDG directories:
- config: `~/.config/banger`
- state: `~/.local/state/banger`
- cache: `~/.cache/banger`
- runtime socket: `$XDG_RUNTIME_DIR/banger/bangerd.sock`

If `ssh_key_path` is unset, banger creates and uses:
- `~/.config/banger/ssh/id_ed25519`

`default_image_name` now only means “use this registered image when `vm create`
omits `--image`”. The daemon does not auto-register images from host paths.
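Putting the common keys together, a small `~/.config/banger/config.toml` might look like this (all values are illustrative, not shipped defaults):

```toml
# Illustrative values only.
log_level = "info"
web_listen_addr = "127.0.0.1:7777"  # "" disables the web UI
tap_pool_size = 4
firecracker_bin = "/usr/local/bin/firecracker"
ssh_key_path = "/home/you/.config/banger/ssh/id_ed25519"
default_image_name = "devbox"
```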
## Logs
- daemon lifecycle logs: `~/.local/state/banger/bangerd.log`
- raw Firecracker output per VM: `~/.local/state/banger/vms/<vm-id>/firecracker.log`

`bangerd.log` is structured JSON. Set `log_level` in
`~/.config/banger/config.toml` or `BANGER_LOG_LEVEL` to one of `debug`,
`info`, `warn`, or `error`.

## Core Workflow
Check the host:
```bash
./build/bin/banger doctor
```
Register an existing host-side image stack:
```bash
./build/bin/banger image register \
--name base \
--rootfs /abs/path/rootfs.ext4 \
--kernel /abs/path/vmlinux \
--initrd /abs/path/initrd.img \
--modules /abs/path/modules
```
Build a managed image from an existing registered image:
```bash
./build/bin/banger image build \
--name devbox \
--from-image base \
--docker
```
Promote an unmanaged image into daemon-owned managed artifacts:
```bash
./build/bin/banger image promote base
```
Promotion copies the image's `rootfs` and optional `work-seed` into the
daemon's managed image state directory and keeps the same image ID, so existing
VM references stay valid. The image's kernel, initrd, modules, and package
manifest paths stay pointed at their current locations.

Create and use a VM:
```bash
./build/bin/banger vm create --image devbox --name testbox
./build/bin/banger vm ssh testbox
./build/bin/banger vm stop testbox
```
`vm create` stays synchronous by default, but on a TTY it shows live progress
until the VM is fully ready.
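When scripting against a freshly created VM, a small retry loop around an SSH probe avoids racing readiness; a generic sketch (`retry` is a hypothetical helper, not a banger command):

```shell
#!/bin/sh
# Illustrative retry helper: run a command until it succeeds or
# the attempt budget runs out.
retry() {
  attempts=$1; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    "$@" && return 0
    sleep 1
    i=$((i + 1))
  done
  return 1
}

# Example usage (hypothetical VM name):
#   retry 30 ./build/bin/banger vm ssh testbox -- true
```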
## Web UI
`bangerd` serves a local web UI by default at:
- `http://127.0.0.1:7777`
See the effective URL with:
```bash
./build/bin/banger daemon status
```
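For scripts that need the UI address, the URL can be pulled out of the status text; a sketch that assumes the URL appears verbatim in the output (`first_url` is a hypothetical helper):

```shell
#!/bin/sh
# Illustrative extraction: print the first http(s) URL found on stdin.
first_url() {
  grep -o 'https\?://[^[:space:]]*' | head -n 1
}

# Example usage:
#   ./build/bin/banger daemon status | first_url
```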
Disable it with:
```toml
web_listen_addr = ""
```
## Guest Services
Provisioned images include:
- `banger-vsock-agent`
- guest networking bootstrap
- `mise`
- `opencode`
- a default guest `opencode` service on `0.0.0.0:4096`
From the host:
```bash
./build/bin/banger vm ports testbox
opencode attach http://<guest-ip>:4096
```
Existing VMs keep using their current image and disks; rebuilds only affect VMs
created from the rebuilt image afterward. Restarting an existing VM is not
enough to pick up guest provisioning changes such as the default `opencode`
server service.
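Because restarting is not enough, picking up a rebuilt image means deleting and recreating the affected VMs; a dry-run sketch that only prints the commands it would run (image and VM names are hypothetical):

```shell
#!/bin/sh
# Illustrative dry run: print the delete/create pairs needed to move
# existing VMs onto a rebuilt image. Nothing is executed.
recreate_plan() {
  image=$1; shift
  for vm in "$@"; do
    printf 'banger vm delete %s\n' "$vm"
    printf 'banger vm create --image %s --name %s\n' "$image" "$vm"
  done
}

recreate_plan devbox testbox-a testbox-b
```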
## Manual Helpers
The shell helpers are now explicit manual workflows under `./build/manual`.
Rebuild a Debian-style manual rootfs:
```bash
make rootfs ARGS='--base-rootfs /abs/path/rootfs.ext4 --kernel /abs/path/vmlinux --initrd /abs/path/initrd.img --modules /abs/path/modules'
```
The output lands in:
- `./build/manual/rootfs-docker.ext4`
- `./build/manual/rootfs-docker.work-seed.ext4`

`config/packages.apt` controls the base apt packages baked into rebuilt images,
including guest tools such as `ss` used by `banger vm ports`.
## Experimental Void Rootfs
There is also a separate, opt-in builder for an experimental Void Linux guest
path:
```bash
make void-kernel
```
Build the experimental Void rootfs:
```bash
make rootfs-void
```
This path is intentionally local-only and does not change the default Debian
image flow. `make void-kernel` stages an actual Void `linux6.12` kernel package
under `./build/manual/void-kernel/`, including the raw `vmlinuz`, extracted
Firecracker `vmlinux`, a matching `initramfs`, the matching config, and the
matching modules tree. The initramfs is generated locally with `dracut`
against the downloaded Void sysroot so the kernel, initrd, and modules stay
aligned. `make rootfs-void` prefers that staged modules tree when it exists.
The rootfs builder itself still builds a lean `x86_64-glibc` Void userspace
with:
- `bash` installed for interactive/admin use
- pinned `mise` installed at `/usr/local/bin/mise`, activated for `root` bash shells
- `opencode` installed through `mise`, with `/usr/local/bin/opencode` available by default
- a guest network bootstrap that configures the VM NIC from the kernel `ip=` boot arg
- a host-reachable `opencode serve` runit service enabled on guest TCP port `4096`
- `docker` plus `docker-compose` installed from Void packages
- the `docker` runit service enabled, with Docker netfilter/forwarding kernel prep
- `openssh` enabled under runit
- the bundled `banger-vsock-agent` health agent enabled under runit
- `root` normalized to `/bin/bash` while keeping `/bin/sh` as the distro's system shell
- a generated `/root` work-seed for fast creates
It still keeps some Debian-oriented extras out for now:
- no tmux plugin defaults
The builder fetches official static XBPS tools and packages from the Void
mirror during the build. The kernel fetcher and rootfs builder currently
support only `x86_64`.
The package set comes from [`config/packages.void`](config/packages.void).
You can override the mirror, size, output path, or kernel package directly:
```bash
./scripts/make-void-kernel.sh --kernel-package linux6.12
./scripts/make-rootfs-void.sh --mirror https://repo-default.voidlinux.org --size 2G
```
The fastest local iteration loop does not require changing your default image
config at all:
```bash
make void-kernel
make rootfs-void
make void-register
./build/bin/banger vm create --image void-exp --name void-dev
./build/bin/banger vm ssh void-dev
```
Rebuild the staged Void kernel or Void rootfs, then recreate existing
`void-exp` VMs after changing the package set, guest provisioning, or staged
kernel artifacts; restart alone will not update the image contents, kernel, or
`/root` work-seed.
That flow uses:
- `./build/manual/void-kernel/`
- `./build/manual/rootfs-void.ext4`
- `./build/manual/rootfs-void.work-seed.ext4`

There is also a smoke path for the experimental image:
```bash
make verify-void
```
`make void-register` uses the unmanaged image registration path to create or
update a `void-exp` image record in place, so repeated rebuilds do not require
editing `~/.config/banger/config.toml`. It expects a complete staged Void
kernel set under `./build/manual/void-kernel/` and points the experimental
image at the staged Void `vmlinux`, `initramfs`, and matching modules tree.
There is also a one-step helper target:
```bash
make void-vm VOID_VM_NAME=void-a
```
If you want the Void image to become your default for `vm create` without
`--image`, set `default_image_name = "void-exp"` in
`~/.config/banger/config.toml`. `banger image build` remains Debian-only in
this pass.
## Registering Unmanaged Images
You can also register any local rootfs as an unmanaged image record without
changing global defaults:
```bash
banger image register --name local-test --rootfs /abs/path/rootfs.ext4
```
Optional paths let you point at an existing work seed, kernel, initrd, modules,
and package manifest:
```bash
banger image register \
--name void-exp \
  --rootfs ./build/manual/rootfs-void.ext4 \
  --work-seed ./build/manual/rootfs-void.work-seed.ext4 \
  --kernel ./build/manual/void-kernel/boot/vmlinux-6.12.77_1 \
  --initrd ./build/manual/void-kernel/boot/initramfs-6.12.77_1.img \
  --modules ./build/manual/void-kernel/lib/modules/6.12.77_1 \
--packages ./config/packages.void
```
If an unmanaged image with the same name already exists, `image register`
updates it in place so future `vm create --image <name>` calls pick up the new
artifacts immediately.
## Benchmarking Create Time
Benchmark the current host's `vm create` wall time plus first-SSH readiness:
```bash
make bench-create
```
Pass options through `ARGS`, for example:
```bash
make bench-create ARGS="--runs 3 --image docker-dev"
```
The benchmark prints JSON with:
- `create_ms`: wall time for `banger vm create`, including full readiness
gating for the guest vsock agent and default `opencode` service
- `ssh_ready_ms`: wall time from create start until `banger vm ssh <vm> -- true`
succeeds
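For tracking create-time regressions over runs, the JSON can be reduced to a single number with standard tools; a sketch assuming the field is printed as `"create_ms": <integer>` (use `jq` instead if it is available):

```shell
#!/bin/sh
# Illustrative extraction: pull an integer field out of one-line JSON.
json_int() {
  sed -n "s/.*\"$1\"[: ]*\([0-9][0-9]*\).*/\1/p"
}

printf '{"create_ms": 4200, "ssh_ready_ms": 5100}\n' | json_int create_ms
```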
## Remaining Shell Helpers
The runtime VM lifecycle is managed through `banger`. The remaining shell scripts are not the primary user interface:
- `scripts/customize.sh`: manual reference flow for rootfs customization; `banger image build` is now Go-native, but the script still reads assets from `BANGER_RUNTIME_DIR` and stores transient state under `BANGER_STATE_DIR`/XDG state
- `scripts/make-rootfs.sh`: convenience wrapper for rebuilding `./build/runtime/rootfs-docker.ext4`
- `scripts/interactive.sh`: manual one-off rootfs customization over SSH
- `scripts/lib/packages.sh`: shell helper library
- `scripts/verify.sh`: smoke test for the Go workflow (`./scripts/verify.sh --nat` adds NAT coverage)
## Notes
- Firecracker is resolved from `PATH` by default.
- Managed image delete removes the daemon-owned artifact dir.
- The companion vsock helper is internal to the install/build layout, not a
  user-configured runtime path.
- Stopping a VM preserves its overlay and work disk.