# banger
Persistent Firecracker development VMs managed through a Go daemon and CLI.
## Requirements
- Linux host with KVM (`/dev/kvm` access)
- Vsock support for post-SSH liveness reminders (`/dev/vhost-vsock`)
- Core VM lifecycle: `sudo`, `ip`, `dmsetup`, `losetup`, `blockdev`, `truncate`, `pgrep`, `chown`, `chmod`, `kill`
- Guest rootfs patching: `e2cp`, `e2rm`, `debugfs`
- Guest work disk creation/resizing: `mkfs.ext4`, `e2fsck`, `resize2fs`, `mount`, `umount`, `cp`
- SSH and logs: `ssh`
- Optional NAT: `iptables`, `sysctl`
- Image build: the bundled SSH key plus the tools above; `banger image build` no longer shells out through `customize.sh`
`banger` validates these per command and returns actionable errors instead of
assuming one workstation layout.
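`banger doctor` (see below) is the supported readiness check, but a quick manual sweep over the listed tools is easy to script. A minimal sketch covering an illustrative subset:
```bash
# Report any of the core host tools that are missing from PATH.
for tool in sudo ip dmsetup losetup blockdev truncate e2cp e2rm debugfs ssh; do
  command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
done
```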
## Runtime Bundle
Runtime artifacts are no longer tracked directly in Git. Source checkouts use a
generated `./runtime/` bundle, while installed binaries use
`$(prefix)/lib/banger`.
The bundle contains:
- `firecracker`
- `banger-vsock-agent` for the guest-side vsock HTTP health agent and SSH reminder checks
- `bundle.json` with the bundle's default kernel/initrd/modules/rootfs paths
- a kernel, initrd, and modules tree referenced by `bundle.json`
- `rootfs-docker.ext4`
- `rootfs-docker.work-seed.ext4` when present, used to seed `/root` quickly on
new VM creates
- `rootfs.ext4` when present
- `packages.apt`
- `id_ed25519`
- the helper scripts used by manual customization and installs
Bootstrap a source checkout from a local or published runtime archive. The
checked-in [`runtime-bundle.toml`](runtime-bundle.toml)
is a template and intentionally ships with empty `url` and `sha256`.
If you need to create a local archive first, do that from a checkout or machine
that already has a populated `./runtime/` tree:
```bash
make runtime-package
cp dist/banger-runtime.tar.gz /path/to/fresh-checkout/dist/
```
In the fresh checkout:
```bash
cp runtime-bundle.toml runtime-bundle.local.toml
```
Edit `runtime-bundle.local.toml` to point at the staged archive and checksum:
```toml
url = "./dist/banger-runtime.tar.gz"
sha256 = "<sha256 printed by make runtime-package>"
```
Then bootstrap `./runtime/` with the local manifest copy:
```bash
make runtime-bundle RUNTIME_MANIFEST=runtime-bundle.local.toml
```
`url` may be a relative path, absolute path, `file:///...` URL, or HTTP(S)
URL. `make install` will not fetch artifacts for you.
## Build
```bash
make build
```
Run `make build` after `./runtime/` has been bootstrapped. It also rebuilds the
bundled `banger-vsock-agent` guest helper in `./runtime/`.
Install into `~/.local/bin` by default, with the runtime bundle under
`~/.local/lib/banger`:
```bash
make install
```
After `make install`, the installed `banger` and `bangerd` do not need the repo
checkout to keep working.
## Basic VM Workflow
Create and boot a VM:
```bash
banger vm create --name calm-otter --disk-size 16G
```
`banger vm create` now waits for full guest readiness by default, including the
guest vsock agent and the default `opencode` service, and prints live progress
stages on TTY stderr while it waits.
Check host/runtime readiness before creating VMs:
```bash
banger doctor
```
List VMs:
```bash
banger vm list
```
Inspect a VM:
```bash
banger vm show calm-otter
banger vm stats calm-otter
```
SSH into a running VM:
```bash
banger vm ssh calm-otter
```
When the SSH session exits normally, `banger` checks the guest over vsock and
reminds you if the VM is still running.
Inspect host-reachable listening ports for a running VM:
```bash
banger vm ports calm-otter
```
Stop, restart, kill, or delete it:
```bash
banger vm stop calm-otter
banger vm start calm-otter
banger vm restart calm-otter
banger vm kill --signal TERM calm-otter
banger vm delete calm-otter
```
Update stopped VM settings:
```bash
banger vm set calm-otter --memory 2048 --vcpu 4 --disk-size 32G
```
Lifecycle and `set` actions also accept multiple VM refs and run them
concurrently:
```bash
banger vm stop calm-otter buildbox api-1
banger vm kill --signal KILL aa12bb34 cc56dd78
banger vm set --nat web-1 web-2 web-3
```
## Daemon
The CLI auto-starts `bangerd` when needed.
Useful daemon commands:
```bash
banger daemon status
banger daemon socket
banger daemon stop
```
`banger daemon status` prints the daemon PID, socket path, daemon log path, and
the built-in DNS listener address. The daemon also serves a local web UI on
`http://127.0.0.1:7777` by default, and `daemon status` prints that URL when it
is enabled.
Use the web UI for dashboard, VM lifecycle, image inventory, VM create
progress, ports/log inspection, and image build/register/promote/delete flows:
```text
http://127.0.0.1:7777
```
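A quick reachability check from a shell, assuming the default listen address:
```bash
# Prints the HTTP status code if the web UI is up.
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:7777/
```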
The image forms use a server-side host-path picker. They do not upload files
through the browser; they select absolute paths that already exist on the host.
Mutating actions in the UI require the same sudo readiness as the CLI-backed
workflow. If the page shows writes as disabled, run:
```bash
sudo -v
```
and refresh the page.
State lives under XDG directories:
- config: `~/.config/banger`
- state: `~/.local/state/banger`
- cache: `~/.cache/banger`
- runtime socket: `$XDG_RUNTIME_DIR/banger/bangerd.sock`
Installed binaries resolve their runtime bundle from `../lib/banger` relative to
the executable. Source-checkout binaries resolve it from `./runtime` next to the
repo-built `./banger`. You can override either with `runtime_dir` in
`~/.config/banger/config.toml` or `BANGER_RUNTIME_DIR`.
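For example, a one-off override through the environment variable (path illustrative):
```bash
BANGER_RUNTIME_DIR=/path/to/alt-runtime ./banger doctor
```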
Useful config keys:
- `log_level`
- `runtime_dir`
- `web_listen_addr` (`""` disables the web UI)
- `tap_pool_size`
- `firecracker_bin`
- `namegen_path`
- `customize_script` (manual helper compatibility; `banger image build` is Go-native)
- `vsock_agent_path`
- `default_rootfs`
- `default_work_seed`
- `default_base_rootfs`
- `default_kernel`
- `default_initrd`
- `default_modules_dir`
- `default_packages_file`
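A minimal `~/.config/banger/config.toml` sketch using the keys above (all values illustrative, not required defaults):
```toml
log_level = "info"
runtime_dir = "/home/you/.local/lib/banger"
web_listen_addr = "127.0.0.1:7777"
tap_pool_size = 4
```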
Guest SSH access always uses the private key shipped in the resolved runtime
bundle. `ssh_key_path` is no longer a supported override for `banger vm ssh`,
VM start key injection, or daemon guest provisioning.
## Doctor
`banger doctor` runs the same readiness checks the Go control plane uses for VM
start, host-integrated features, and image builds. It reports runtime bundle
state, core VM host tools, current feature readiness, and image-build
prerequisites in a concise pass/warn/fail list.
Use it when bringing up a new machine, after changing the runtime bundle, or
before adding new host-integrated VM features.
## Logs
- daemon lifecycle logs: `~/.local/state/banger/bangerd.log`
- raw Firecracker output per VM: `~/.local/state/banger/vms/<vm-id>/firecracker.log`
- raw image-build helper output: `~/.local/state/banger/image-build/*.log`
`bangerd.log` is structured JSON. Set `log_level` in
`~/.config/banger/config.toml` or `BANGER_LOG_LEVEL` to one of `debug`,
`info`, `warn`, or `error`.
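Because each line is a JSON object, standard tools can follow and pretty-print the log, for example (assuming `jq` is installed):
```bash
tail -f ~/.local/state/banger/bangerd.log | jq .
```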
## Images
List images:
```bash
banger image list
```
Build a managed image:
```bash
banger image build --name docker-dev --docker
```
The web UI exposes both managed image build and unmanaged image register forms.
Builds run through an async progress page; register, promote, and delete remain
direct form actions.
Rebuilt images:
- install a pinned `mise` at `/usr/local/bin/mise` and activate it for bash login and interactive shells
- install `opencode` through `mise` and expose `/usr/local/bin/opencode`
- configure `tmux-resurrect` plus `tmux-continuum` for `root`, with periodic autosaves and manual-only restore by default
- start a host-reachable `opencode serve` service on guest TCP port `4096`
- bake in the `banger-vsock-agent` systemd service used by the post-SSH reminder path and guest health checks
- emit a `work-seed.ext4` sidecar that lets new VMs clone a prepared `/root` work disk instead of rebuilding it from scratch on every create
Show or delete images:
```bash
banger image show docker-dev
banger image delete docker-dev
```
Promote an existing unmanaged image into a managed one:
```bash
banger image promote default
banger image promote void-exp
```
Promotion copies the image's `rootfs` and optional `work-seed` into the
daemon's managed image state directory and keeps the same image ID, so existing
VM references stay valid. The image's kernel, initrd, modules, and package
manifest paths stay pointed at their current locations.
`banger` auto-registers the bundled `default_rootfs` image when it exists. If
the bundle does not include a separate base `rootfs.ext4`, `image build` falls
back to using `rootfs-docker.ext4` as its default base image.
## Networking And DNS
Enable NAT when creating or updating a VM:
```bash
banger vm create --name web --nat
banger vm set web --nat
banger vm set web --no-nat
```
NAT is applied by the Go control plane using host `iptables` rules derived from
the VM's current guest IP and TAP device. The remaining shell helpers also
route NAT changes through `banger` instead of a standalone shell NAT script.
`bangerd` also serves a tiny authoritative DNS service on `127.0.0.1:42069`
for daemon-managed VMs. Known `A` records resolve `<vm-name>.vm` to the VM's
guest IPv4 address. Integrate your local resolver separately if you want
transparent `.vm` lookups on the host.
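To confirm a record without any resolver integration, query the listener directly, for example with `dig` (assuming it is installed):
```bash
dig @127.0.0.1 -p 42069 calm-otter.vm A +short
```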
`banger vm ports` asks the guest-side `banger-vsock-agent` to run `ss`, then
prints host-usable endpoints plus the owning process/command. TCP listeners get
short best-effort HTTP and HTTPS probes; detected web listeners are shown as
`http` or `https`, and the endpoint column becomes a clickable URL such as
`https://<hostname>.vm:port/`. Older images without `ss` may need rebuilding
before `vm ports` works.
Newly rebuilt images also start `opencode serve` by default on guest TCP port
`4096`, bound on guest interfaces so the host can reach it directly at the
guest IP or via the endpoint shown by `banger vm ports`.
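A quick host-side probe of that service, substituting the endpoint that `banger vm ports` reports for guest port `4096`:
```bash
# Prints the HTTP status code if the opencode server answers.
curl -s -o /dev/null -w '%{http_code}\n' http://<guest-ip>:4096/
```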
## Storage Model
- VMs share a read-only base rootfs image.
- Each VM gets its own sparse writable system overlay for `/`.
- Each VM gets its own persistent ext4 work disk mounted at `/root`.
- Stopping a VM preserves its overlay and work disk.
- When an image has a `work-seed.ext4` sidecar, new VM creates clone that seed
and only resize it when needed.
- Older managed images without the seeded SSH metadata may take one slower
create to repair `/root` access and refresh their managed work-seed; later
creates use the fast path.
- Images without any `work-seed.ext4` still work, but create more slowly
because `/root` must be built from scratch.
- The daemon can keep a small idle TAP pool warm in the background so VM create
does not need to synchronously create a fresh TAP every time. `tap_pool_size`
controls the pool depth.
## Architecture Notes
The Go daemon is the primary control plane. VM host integrations such as the
built-in `.vm` DNS service, NAT, and `/root` work-disk wiring now sit behind a
capability pipeline in the daemon instead of being open-coded through the VM
lifecycle. Guest boot-time files and mounts are rendered through a structured
guest-config builder rather than ad hoc `fstab` string mutation.
That split is intentional: future host-integrated features should plug into the
daemon capability path and `banger doctor` checks first, with the remaining
shell helpers treated as manual workflows rather than architecture drivers.
## Rebuilding The Repo Default Rootfs
`packages.apt` controls the base apt packages baked into rebuilt images,
including guest tools such as `ss` used by `banger vm ports`.
To rebuild the source-checkout default image in `./runtime/rootfs-docker.ext4`:
```bash
make rootfs
```
That rebuild also regenerates `./runtime/rootfs-docker.work-seed.ext4`, which
the daemon uses to speed up future `vm create` calls, and bakes in the default
host-reachable `opencode` server service.
If your runtime bundle does not include `./runtime/rootfs.ext4`, pass an
explicit base image instead:
```bash
./make-rootfs.sh --base-rootfs /path/to/base-rootfs.ext4
```
If the package manifest changed and you want a fresh source-checkout image:
```bash
rm -f ./runtime/rootfs-docker.ext4 ./runtime/rootfs-docker.ext4.packages.sha256
make rootfs
```
`make rootfs` expects a bootstrapped runtime bundle.
Existing VMs keep using their current image and disks; rebuilds only affect VMs
created from the rebuilt image afterward. Restarting an existing VM is not
enough to pick up guest provisioning changes such as the default `opencode`
server service.
## Experimental Void Rootfs
There is also a separate, opt-in builder for an experimental Void Linux guest
path:
```bash
make void-kernel
make rootfs-void
```
That writes:
- `./runtime/void-kernel/` when `make void-kernel` is used
- `./runtime/rootfs-void.ext4`
- `./runtime/rootfs-void.work-seed.ext4`
This path is intentionally local-only and does not change the default Debian
image flow. `make void-kernel` stages an actual Void `linux6.12` kernel package
under `./runtime/void-kernel/`, including the raw `vmlinuz`, extracted
Firecracker `vmlinux`, a matching `initramfs`, the matching config, and the
matching modules tree. The initramfs is generated locally with `dracut`
against the downloaded Void sysroot so the kernel, initrd, and modules stay
aligned. `make rootfs-void` then prefers that staged modules tree when it exists;
otherwise it falls back to the runtime bundle modules. The rootfs builder
itself still builds a lean `x86_64-glibc` Void userspace with:
- `bash` installed for interactive/admin use
- pinned `mise` installed at `/usr/local/bin/mise`, activated for `root` bash shells
- `opencode` installed through `mise`, with `/usr/local/bin/opencode` available by default
- a guest network bootstrap that configures the VM NIC from the kernel `ip=` boot arg
- a host-reachable `opencode serve` runit service enabled on guest TCP port `4096`
- `docker` plus `docker-compose` installed from Void packages
- the `docker` runit service enabled, with Docker netfilter/forwarding kernel prep
- `openssh` enabled under runit
- the bundled `banger-vsock-agent` health agent enabled under runit
- `root` normalized to `/bin/bash` while keeping `/bin/sh` as the distro's system shell
- a generated `/root` work-seed for fast creates
It still keeps some Debian-oriented extras out for now:
- no tmux plugin defaults
The builder fetches official static XBPS tools and packages from the Void
mirror during the build. The kernel fetcher and rootfs builder currently
support only `x86_64`.
The package set comes from [`packages.void`](packages.void).
You can override the mirror, size, output path, or kernel package directly:
```bash
./make-void-kernel.sh --kernel-package linux6.12
./make-rootfs-void.sh --mirror https://repo-default.voidlinux.org --size 2G
```
The fastest local iteration loop does not require changing your default image
config at all:
```bash
make void-kernel
make rootfs-void
make void-register
./banger vm create --image void-exp --name void-dev
./banger vm ssh void-dev
```
After changing the package set, guest provisioning, or staged kernel artifacts,
rebuild the staged Void kernel or Void rootfs and then recreate existing
`void-exp` VMs; restart alone will not pick up the new image contents, kernel,
or `/root` work-seed.
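A typical recreate loop after such a rebuild, reusing the `void-dev` name from the example above:
```bash
./banger vm delete void-dev
./banger vm create --image void-exp --name void-dev
```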
There is also a smoke path for the experimental image:
```bash
make verify-void
```
`make void-register` uses the unmanaged image registration path to create or
update a `void-exp` image record in place, so repeated rebuilds do not require
editing `~/.config/banger/config.toml`. It expects a complete staged Void
kernel set under `./runtime/void-kernel/` and points the experimental image at
the staged Void `vmlinux`, `initramfs`, and matching modules tree.
There is also a one-step helper target:
```bash
make void-vm VOID_VM_NAME=void-a
```
If you really want the Void image to become your default for `vm create`
without `--image`, use the checked-in override template at
[`examples/void-exp.config.toml`](examples/void-exp.config.toml)
and merge its four settings into `~/.config/banger/config.toml`.
`banger image build` remains Debian-only in this pass. Do not point
`default_base_rootfs` at the Void artifact yet.
## Registering Unmanaged Images
You can also register any local rootfs as an unmanaged image record without
changing global defaults:
```bash
banger image register --name local-test --rootfs /abs/path/rootfs.ext4
```
Optional paths let you point at an existing work seed, kernel, initrd, modules,
and package manifest:
```bash
banger image register \
  --name void-exp \
  --rootfs ./runtime/rootfs-void.ext4 \
  --work-seed ./runtime/rootfs-void.work-seed.ext4 \
  --kernel ./runtime/void-kernel/boot/vmlinux-6.12.77_1 \
  --initrd ./runtime/void-kernel/boot/initramfs-6.12.77_1.img \
  --modules ./runtime/void-kernel/lib/modules/6.12.77_1 \
  --packages ./packages.void
```
If an unmanaged image with the same name already exists, `image register`
updates it in place so future `vm create --image <name>` calls pick up the new
artifacts immediately.
## Maintaining The Runtime Bundle
The checked-in [`runtime-bundle.toml`](runtime-bundle.toml)
is a template. Keep `bundle_metadata` accurate there, but use a separate local
manifest copy when you need concrete `url` and `sha256` values for bootstrap
testing or publication.
Package a local `./runtime/` tree into an archive:
```bash
make runtime-package
```
That writes `dist/banger-runtime.tar.gz` and prints its SHA256 so you can update
a local manifest copy before testing bootstrap changes or publishing the
archive elsewhere.
## Benchmarking Create Time
Benchmark the current host's `vm create` wall time plus first-SSH readiness:
```bash
make bench-create
```
Pass options through `ARGS`, for example:
```bash
make bench-create ARGS="--runs 3 --image docker-dev"
```
The benchmark prints JSON with:
- `create_ms`: wall time for `banger vm create`, including full readiness
gating for the guest vsock agent and default `opencode` service
- `ssh_ready_ms`: wall time from create start until `banger vm ssh <vm> -- true`
succeeds
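If you only want those two fields, a pipeline like this works, assuming the JSON lands on the final line of the `make` output (an assumption about the output framing, not something the Makefile guarantees):
```bash
make bench-create ARGS="--runs 1" | tail -n1 | jq '{create_ms, ssh_ready_ms}'
```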
## Remaining Shell Helpers
The runtime VM lifecycle is managed through `banger`. The remaining shell scripts are not the primary user interface:
- `customize.sh`: manual reference flow for rootfs customization; `banger image build` is now Go-native, but the script still reads assets from `BANGER_RUNTIME_DIR` and stores transient state under `BANGER_STATE_DIR`/XDG state
- `make-rootfs.sh`: convenience wrapper for rebuilding `./runtime/rootfs-docker.ext4`
- `interactive.sh`: manual one-off rootfs customization over SSH
- `packages.sh`: shell helper library
- `verify.sh`: smoke test for the Go workflow (`./verify.sh --nat` adds NAT coverage)