# Advanced flows
`banger vm run` covers the common sandbox case. This doc is for the
rest: scripting, arbitrary images, custom rootfs stacks, long-lived
guest processes.
## `vm create` — the low-level primitive
Use it when you want to provision a VM without starting it, or when
you need to script VM creation piecewise.
```bash
banger vm create --image debian-bookworm --name testbox --no-start
banger vm start testbox
banger vm ssh testbox
banger vm stop testbox
banger vm delete testbox
```
Sweep every non-running VM (stopped, created, error) with:
```bash
banger vm prune # interactive confirmation
banger vm prune -f # skip the prompt
```
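In scripts there is no TTY to answer the prompt, so pass `-f` and
check the exit status: prune exits non-zero on partial failure and
reports which VM could not be deleted. A minimal sketch:

```bash
# Non-interactive cleanup, e.g. nightly in CI; loud on partial failure.
banger vm prune -f || { echo "prune left VMs behind" >&2; exit 1; }
```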
`vm create` is synchronous by default; on a TTY it also shows live
progress until the VM is fully ready.
## `image pull <oci-ref>` — arbitrary container images
For images outside banger's catalog, pull from any OCI registry:
```bash
banger image pull docker.io/library/alpine:3.20 --kernel-ref generic-6.12
```
Layers are flattened, ownership is fixed up (setuid binaries and
root-owned config are preserved), banger's guest agents are injected,
and a first-boot systemd service installs `openssh-server` via the
guest's package manager, so the VM is reachable over SSH from its
first boot.
See [`docs/oci-import.md`](oci-import.md) for supported distros,
caveats, and the `internal/imagepull` design.
## `image register` — existing host-side stack
If you already have an ext4 rootfs, a kernel, optional initrd, and
optional modules as files on disk:
```bash
banger image register --name base \
  --rootfs /abs/path/rootfs.ext4 \
  --kernel-ref generic-6.12
```
You can mix `--kernel-ref` (a cataloged kernel) with `--rootfs` from
disk, or pass `--kernel /abs/path/vmlinux` for a one-off kernel.
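The one-off kernel variant of the call above might look like this
(name and paths hypothetical):

```bash
banger image register --name oneoff \
  --rootfs /abs/path/rootfs.ext4 \
  --kernel /abs/path/vmlinux
```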
For reproducible custom images, write a Dockerfile and publish it to
an image catalog. See [`docs/image-catalog.md`](image-catalog.md).
## Workspace + session primitives
Long-lived guest commands managed by the daemon, attachable over a
local Unix socket bridge. Useful for agent/background processes that
need to survive SSH disconnects.
```bash
banger vm workspace prepare <vm> ./other-repo --guest-path /root/repo
banger vm session start <vm> --name planner --cwd /root/repo \
  --stdin-mode pipe -- pi --mode rpc
banger vm session attach <vm> planner
banger vm session logs <vm> planner --stream stderr
banger vm session stop <vm> planner
```
Details:
- `vm workspace prepare` materialises a local git checkout into a
  running VM. The default guest path is `/root/repo`; the default mode
  is a shallow metadata copy plus an overlay of tracked and untracked
  non-ignored files.
- `vm session start` launches a daemon-managed, long-lived guest
  command. Before launch, the daemon preflights that the guest `cwd`
  exists and that the command is on the guest `PATH`. Use
  `--stdin-mode pipe` when you need live `attach`.
- `vm session attach` is exclusive and same-host only. Pipe-mode
  sessions survive daemon restarts.
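These primitives compose for scripting. A sketch with hypothetical
names, starting a background build and following its stderr:

```bash
banger vm session start testbox --name build --cwd /root/repo \
  --stdin-mode pipe -- make -j4
banger vm session logs testbox build --stream stderr
```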
## Inspecting boot failures
When a VM's create flow errors ("ssh did not come up within 90s" or
similar), the VM is kept alive for inspection:
- `banger vm logs <name>` — the Firecracker serial console output,
  the best window into a stuck boot (systemd unit failures, kernel
  panics, missing modules).
- `banger vm ports <name>` — what's listening in the guest. Works as
  long as banger's vsock agent has come up, even if SSH is wedged.
- `banger vm show <name>` — daemon-side state (IP, PID, overlay
  paths).
`--rm` on `vm run` intentionally does NOT fire when the initial ssh
wait times out, so the VM stays around for post-mortem.
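A typical post-mortem pass over a stuck VM might run the commands
above in order (VM name hypothetical):

```bash
banger vm logs stuckbox    # serial console: systemd failures, panics
banger vm ports stuckbox   # anything listening? did sshd come up?
banger vm show stuckbox    # IP, PID, overlay paths
banger vm delete stuckbox  # clean up once diagnosed
```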