
# banger
`banger` manages Firecracker development VMs with a local daemon, managed image artifacts, and an experimental localhost web UI.
## Requirements
- Linux with `/dev/kvm`
- `sudo`
- Firecracker installed on `PATH`, or `firecracker_bin` set in config
- The usual host tools checked by `./build/bin/banger doctor`
`banger` now owns complete managed image sets. A managed image set includes:
- `rootfs`
- optional `work-seed`
- `kernel`
- optional `initrd`
- optional `modules`
There is no runtime bundle anymore.
## Build
```bash
make build
```
This writes:
- `./build/bin/banger`
- `./build/bin/bangerd`
- `./build/bin/banger-vsock-agent`
## Install
```bash
make install
```
That installs:
- `banger`
- `bangerd`
- the `banger-vsock-agent` companion helper under `../lib/banger/`
## Config
Config lives at `~/.config/banger/config.toml`.
Supported keys:
- `log_level`
- `web_listen_addr`
- `firecracker_bin`
- `ssh_key_path`
- `default_image_name`
- `auto_stop_stale_after`
- `stats_poll_interval`
- `metrics_poll_interval`
- `bridge_name`
- `bridge_ip`
- `cidr`
- `tap_pool_size`
- `default_dns`
If `ssh_key_path` is unset, banger creates and uses:
- `~/.config/banger/ssh/id_ed25519`
`default_image_name` now only means “use this registered image when `vm create` omits `--image`”. The daemon does not auto-register images from host paths.
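A minimal example, using only the keys listed above (the values shown are illustrative, not required defaults — adjust them for your host):

```toml
# ~/.config/banger/config.toml — illustrative values only
log_level = "info"
web_listen_addr = "127.0.0.1:7777"
default_image_name = "devbox"
bridge_ip = "172.16.0.1"
cidr = "172.16.0.0/24"
```

Any key you omit falls back to the daemon's built-in default.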
## Core Workflow
Check the host:
```bash
./build/bin/banger doctor
```
Register an existing host-side image stack:
```bash
./build/bin/banger image register \
--name base \
--rootfs /abs/path/rootfs.ext4 \
--kernel /abs/path/vmlinux \
--initrd /abs/path/initrd.img \
--modules /abs/path/modules
```
Or pull a pre-built kernel from the catalog and reference it by name:
```bash
./build/bin/banger kernel list --available
./build/bin/banger kernel pull void-6.12
./build/bin/banger image register \
--name base \
--rootfs /abs/path/rootfs.ext4 \
--kernel-ref void-6.12
```
See [`docs/kernel-catalog.md`](docs/kernel-catalog.md) for catalog
maintenance.
Or pull a rootfs directly from any OCI registry (Docker Hub, GHCR, …):
```bash
./build/bin/banger image pull docker.io/library/debian:bookworm \
--kernel-ref void-6.12
```
`image pull` downloads the image, flattens its layers into an ext4
rootfs, and registers it as a managed banger image. Experimental — see
[`docs/oci-import.md`](docs/oci-import.md) for current limitations
(notably: file-ownership caveat means pulled images are a base for
`image build`, not yet directly bootable).
Build a managed image from an existing registered image:
```bash
./build/bin/banger image build \
--name devbox \
--from-image base \
--docker
```
Promote an unmanaged image into daemon-owned managed artifacts:
```bash
./build/bin/banger image promote base
```
Create and use a VM:
```bash
./build/bin/banger vm create --image devbox --name testbox
./build/bin/banger vm ssh testbox
./build/bin/banger vm stop testbox
```
`vm create` stays synchronous by default, but on a TTY it now shows live progress until the VM is fully ready.
Start a repo-backed VM session:
```bash
./build/bin/banger vm run
./build/bin/banger vm run ../some-repo --branch feature/alpine --from HEAD
```
`vm run` resolves the enclosing git repository, creates a VM, and copies a git checkout (plus the current tracked and untracked, non-ignored files) into `/root/repo`. It then starts a best-effort guest tooling bootstrap that uses only `mise`, prints next-step commands, and exits. It no longer auto-attaches `opencode`; the bootstrap runs asynchronously and logs its output inside the guest.
After `vm run`, use one of:
```bash
./build/bin/banger vm ssh <vm-name>
opencode attach http://<vm-name>.vm:4096 --dir /root/repo
./build/bin/banger vm acp <vm-name>
./build/bin/banger vm ssh <vm-name> -- "cd /root/repo && claude"
./build/bin/banger vm ssh <vm-name> -- "cd /root/repo && pi"
```
For ACP-aware host tools, `./build/bin/banger vm acp <vm-name>` bridges stdio to guest `opencode acp` over SSH. It uses `/root/repo` when that checkout exists, otherwise `/root`, and `--cwd` lets you override the guest working directory explicitly.
If you want reusable orchestration primitives instead of the `vm run` convenience flow, use the daemon-backed workspace and session commands directly:
```bash
./build/bin/banger vm workspace prepare <vm-name>
./build/bin/banger vm workspace prepare <vm-name> ../other-repo --guest-path /root/repo --readonly
./build/bin/banger vm session start <vm-name> --name planner --cwd /root/repo --stdin-mode pipe -- pi --mode rpc --no-session
./build/bin/banger vm session list <vm-name>
./build/bin/banger vm session attach <vm-name> planner
./build/bin/banger vm session logs <vm-name> planner --stream stderr
./build/bin/banger vm session stop <vm-name> planner
```
`vm workspace prepare` materializes a local git checkout into a running VM. The default guest path is `/root/repo` and the default mode is a shallow metadata copy plus tracked and untracked non-ignored overlay. Repositories with git submodules must use `--mode full_copy`; the metadata-based modes still reject them.
`vm session start` creates a daemon-managed long-lived guest command. The daemon preflights that the requested guest `cwd` exists and that the main command, plus any repeated `--require-command` entries, exist in guest `PATH` before launch. Use `--stdin-mode pipe` when you need live `attach`; otherwise use the default detached mode and inspect sessions with `list`, `show`, `logs`, `stop`, and `kill`.
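For example, a detached session whose preflight also verifies its tool dependencies exist in guest `PATH` before launch (the VM and session names here are illustrative):

```bash
./build/bin/banger vm session start testbox \
  --name build \
  --cwd /root/repo \
  --require-command git \
  --require-command make \
  -- make test
```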
`vm session attach` is currently exclusive and same-host only. The daemon exposes a local Unix socket bridge using `stdio_mux_v1`, so only one active attach is allowed at a time. Pipe-mode sessions keep enough guest-side state for the daemon to rebuild that bridge after a daemon restart.
## Web UI (experimental)
`bangerd` serves an experimental local web UI by default at:
- `http://127.0.0.1:7777`
The UI is convenient for local observability but is **not a stable or
supported interface**. Its endpoints, layout, and behaviour may change
without notice, and it has not been hardened for anything beyond single-user
localhost use. Do not expose the listen address to a shared network.
See the effective URL with:
```bash
./build/bin/banger daemon status
```
Disable it with:
```toml
web_listen_addr = ""
```
## Guest Services
Provisioned glibc-backed images include:
- `banger-vsock-agent`
- guest networking bootstrap
- `mise`
- `opencode`
- `claude`
- `pi`
- a default guest `opencode` service on `0.0.0.0:4096`
Alpine currently remains `opencode`-only.
If these host auth files exist, `banger` syncs them into the guest on VM start:
- `~/.local/share/opencode/auth.json` -> `/root/.local/share/opencode/auth.json`
- `~/.claude/.credentials.json` -> `/root/.claude/.credentials.json`
- `~/.pi/agent/auth.json` -> `/root/.pi/agent/auth.json`
Changes on the host take effect after the VM is restarted. Session/history directories are not copied.
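To see which of these files will be picked up on the next restart, you can check for them on the host. This is only an illustrative sketch of the check — banger performs the equivalent lookup internally; `list_auth_files` is a hypothetical helper, not part of the CLI:

```bash
# Print which of the documented host auth files exist under a home directory.
# Hypothetical helper for illustration; banger syncs these itself on VM start.
list_auth_files() {
  home="$1"
  for f in \
    "$home/.local/share/opencode/auth.json" \
    "$home/.claude/.credentials.json" \
    "$home/.pi/agent/auth.json"
  do
    if [ -f "$f" ]; then
      echo "$f"
    fi
  done
}

list_auth_files "$HOME"
```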
From the host:
```bash
./build/bin/banger vm ports testbox
opencode attach http://<guest-ip>:4096
```
## Manual Helpers
The shell helpers are now explicit manual workflows under `./build/manual`.
Rebuild a Debian-style manual rootfs:
```bash
make rootfs ARGS='--base-rootfs /abs/path/rootfs.ext4 --kernel /abs/path/vmlinux --initrd /abs/path/initrd.img --modules /abs/path/modules'
```
The output lands in:
- `./build/manual/rootfs-docker.ext4`
- `./build/manual/rootfs-docker.work-seed.ext4`
## Experimental Void Flow
Stage a Void kernel:
```bash
make void-kernel
```
Build the experimental Void rootfs:
```bash
make rootfs-void
```
Register it:
```bash
make void-register
```
That flow uses:
- `./build/manual/void-kernel/`
- `./build/manual/rootfs-void.ext4`
- `./build/manual/rootfs-void.work-seed.ext4`
## Experimental Alpine Flow
Stage an Alpine virt kernel:
```bash
make alpine-kernel
```
Build the experimental Alpine rootfs:
```bash
make rootfs-alpine
```
Register it:
```bash
make alpine-register
```
Create a VM from it:
```bash
./build/bin/banger vm create --image alpine --name alpine-dev
```
That flow uses:
- `./build/manual/alpine-kernel/`
- `./build/manual/rootfs-alpine.ext4`
- `./build/manual/rootfs-alpine.work-seed.ext4`
The experimental Alpine flow stages a pinned Alpine release by default. Override
that pin with `ALPINE_RELEASE=...` when running the `make alpine-kernel` and
`make rootfs-alpine` helpers if you need a different patch release.
Alpine support currently applies to the explicit register-and-run flow above.
The generic `banger image build --from-image ...` path remains Debian/systemd-oriented and should not be treated as an Alpine image builder.
## Security
Guest VMs are single-user development sandboxes, not multi-tenant servers.
Every provisioned image is configured with:
```
PermitRootLogin yes
StrictModes no
```
This is intentional. The host SSH key is the only authentication mechanism,
no password auth is enabled, and VMs are reachable only through the host
bridge network (`172.16.0.0/24` by default). Do not expose the bridge
interface or the VM guest IPs to an untrusted network.
## Notes
- Firecracker is resolved from `PATH` by default.
- Deleting a managed image removes its daemon-owned artifact directory.
- The companion vsock helper is internal to the install/build layout, not a user-configured runtime path.