One-command development sandboxes on Firecracker microVMs. https://git.thaloco.com/thaloco/banger/
Latest commit: 687fcf0b59 by Thales Maciel, 2026-04-19 14:18:13 -03:00
vm state: split transient kernel/process handles off the durable schema
Separates what a VM IS (durable intent + identity + deterministic
derived paths — `VMRuntime`) from what is CURRENTLY TRUE about it
(firecracker PID, tap device, loop devices, dm-snapshot target — new
`VMHandles`). The durable state lives in the SQLite `vms` row; the
transient state lives in an in-memory cache on the daemon plus a
per-VM `handles.json` scratch file inside VMDir, rebuilt at startup
from OS inspection. Nothing kernel-level rides the SQLite schema
anymore.

Why:

  Persisting ephemeral process handles to SQLite forced reconcile to
  treat "running with a stale PID" as a first-class case and mix it
  with real state transitions. The schema described what we last
  observed, not what the VM is. Every time the observation model
  shifted (tap pool, DM naming, pgrep fallback) the reconcile logic
  grew a new branch. Splitting lets each layer own what it's good at:
  durable records describe intent, in-memory cache + scratch file
  describe momentary reality.

Shape:

  - `model.VMHandles` = PID, TapDevice, BaseLoop, COWLoop, DMName,
    DMDev. Never in SQLite.
  - `VMRuntime` keeps: State, GuestIP, APISockPath, VSockPath,
    VSockCID, LogPath, MetricsPath, DNSName, VMDir, SystemOverlay,
    WorkDiskPath, LastError. All durable or deterministic.
  - `handleCache` on `*Daemon` — mutex-guarded map + scratch-file
    plumbing (`writeHandlesFile` / `readHandlesFile` /
    `rediscoverHandles`). See `internal/daemon/vm_handles.go`.
  - `d.vmAlive(vm)` replaces the 20+ inline
    `vm.State==Running && ProcessRunning(vm.Runtime.PID, apiSock)`
    checks scattered across the codebase. Single source of truth for
    liveness.
  - Startup reconcile: per running VM, load the scratch file, pgrep
    the api sock, either keep (cache seeded from scratch) or demote
    to stopped (scratch handles passed to cleanupRuntime first so DM
    / loops / tap actually get torn down).
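
  In Go terms, the split looks roughly like this (field types are
  assumptions; the real definitions live under internal/model):

    // Transient kernel/process state; never persisted to SQLite.
    // Rebuilt at daemon startup from handles.json plus OS inspection.
    type VMHandles struct {
            PID       int    // firecracker process
            TapDevice string // host tap interface
            BaseLoop  string // loop device over the base image
            COWLoop   string // loop device over the COW layer
            DMName    string // dm-snapshot target name
            DMDev     string // /dev/mapper device path
    }

    // Durable intent plus deterministic derived paths; lives in the
    // SQLite vms row.
    type VMRuntime struct {
            State         string // assumed representation
            GuestIP       string
            APISockPath   string
            VSockPath     string
            VSockCID      uint32 // assumed width
            LogPath       string
            MetricsPath   string
            DNSName       string
            VMDir         string
            SystemOverlay string
            WorkDiskPath  string
            LastError     string
    }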

Verification:

  - `go test ./...` green.
  - Live: `banger vm run --name handles-test -- cat /etc/hostname`
    starts; `handles.json` appears in VMDir with the expected PID,
    tap, loops, DM.
  - `kill -9 $(pgrep bangerd)` while the VM is running, re-invoke the
    CLI, daemon auto-starts, reconcile recognises the VM as alive,
    `banger vm ssh` still connects, `banger vm delete` cleans up.

Tests added:

  - vm_handles_test.go: scratch-file roundtrip, missing/corrupt file
    behaviour, cache concurrency; rediscoverHandles prefers pgrep
    over the scratch file, and still returns scratch contents when
    the process is dead (so cleanup can tear down kernel state).
  - vm_test.go: reconcile test rewritten to exercise the new flow
    (write scratch → reconcile reads it → verifies process is gone →
    issues dmsetup/losetup teardown).
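
  The roundtrip case, roughly (writeHandlesFile / readHandlesFile
  signatures and the sample values are assumptions; vm_handles_test.go
  has the real shapes):

    func TestHandlesRoundtrip(t *testing.T) {
            dir := t.TempDir()
            in := model.VMHandles{PID: 4242, TapDevice: "tap-bgr0"}
            if err := writeHandlesFile(dir, &in); err != nil {
                    t.Fatalf("write: %v", err)
            }
            out, err := readHandlesFile(dir)
            if err != nil {
                    t.Fatalf("read: %v", err)
            }
            if *out != in {
                    t.Fatalf("got %+v, want %+v", *out, in)
            }
    }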

ARCHITECTURE.md updated; `handles` added to Daemon field docs.

banger

One-command development sandboxes on Firecracker microVMs.

Quick start

make install
banger vm run --name sandbox

That's it. banger vm run auto-pulls the default golden image (Debian bookworm with systemd, sshd, Docker CE, git, jq, mise, and the usual dev tools) and kernel, creates a VM, starts it, and drops you into an interactive ssh session. The first run takes a couple of minutes (bundle download); subsequent vm runs take seconds.

Requirements

  • Linux with /dev/kvm
  • sudo
  • Firecracker on PATH, or firecracker_bin set in config
  • host tools checked by banger doctor

Build + install

make install

Installs banger (CLI), bangerd (daemon, auto-starts on first CLI call), and banger-vsock-agent (companion, under $PREFIX/lib/banger/).

To remove the binaries (and stop the daemon):

make uninstall

User data stays in place — the target prints the paths so you can rm -rf them if you want a full purge:

  • ~/.config/banger/ — config, managed SSH keys
  • ~/.local/state/banger/ — VM records, rootfs images, kernels, daemon DB/log
  • ~/.cache/banger/ — OCI layer cache
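
For a full purge, remove all three (paths exactly as listed above):

rm -rf ~/.config/banger ~/.local/state/banger ~/.cache/banger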

Shell completion

banger ships completion scripts for bash, zsh, fish, and powershell. Tab-completion covers subcommands, flags, and live resource names (VM, image, kernel, session) looked up from the daemon. With the daemon down, resource completion silently returns nothing — no file-completion fallback.

# bash (system-wide)
banger completion bash | sudo tee /etc/bash_completion.d/banger

# zsh (user-local; ~/.zfunc must be on fpath)
banger completion zsh > ~/.zfunc/_banger

# fish
banger completion fish > ~/.config/fish/completions/banger.fish

banger completion --help shows the shell-specific loading recipes.

vm run

One command, four common shapes:

banger vm run                          # bare sandbox — drops into ssh
banger vm run ./repo                   # workspace at /root/repo — drops into ssh
banger vm run ./repo -- make test      # workspace + run command, exits with its status
banger vm run --rm -- script.sh        # ephemeral: VM is deleted on exit
  • Bare mode gives you a clean shell.
  • Workspace mode (path given) copies the repo's tracked + untracked non-ignored files into /root/repo and kicks off a best-effort mise tooling bootstrap from the repo's .mise.toml / .tool-versions. Log: /root/.cache/banger/vm-run-tooling-<repo>.log.
  • Command mode (-- <cmd>) runs the command in the guest; exit code propagates through banger.

Disconnecting from an interactive session leaves the VM running. Use vm stop / vm delete to clean up — or pass --rm so the VM auto-deletes once the session / command exits.

--branch and --from apply only to workspace mode. --rm skips the delete when the initial ssh wait times out, so a wedged sshd leaves the VM alive for banger vm logs inspection.
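
Combined, for example (assuming --branch picks which branch of the repo gets copied into the workspace; "wip" is a hypothetical branch name):

banger vm run --rm ./repo --branch wip -- make test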

Hostnames: reaching <vm>.vm

banger's daemon runs a DNS server for the .vm zone. With host-side DNS routing you can ssh root@sandbox.vm or curl http://sandbox.vm:3000 from anywhere on the host — no copy-pasting guest IPs. On systemd-resolved hosts this is auto-wired; everywhere else there's a short recipe. See docs/dns-routing.md.
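
On a systemd-resolved host, a quick sanity check (assuming a VM named sandbox):

resolvectl query sandbox.vm
ssh root@sandbox.vm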

Image catalog

banger image pull <name> fetches a pre-built bundle from the embedded catalog. vm run calls this for you on demand.

Today's catalog:

Name              What it is
debian-bookworm   Debian 12 slim + sshd + docker + dev tools
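
To pull it explicitly ahead of time:

banger image pull debian-bookworm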

See docs/image-catalog.md for the bundle format and how to publish a new entry.

Config

Config lives at ~/.config/banger/config.toml. All keys optional.

Most commonly set:

  • default_image_name — image used when --image is omitted (default debian-bookworm, auto-pulled from the catalog if not local).
  • ssh_key_path — host SSH key. If unset, banger creates ~/.config/banger/ssh/id_ed25519.
  • firecracker_bin — override the auto-resolved PATH lookup.
  • web_listen_addr — experimental web UI; disabled by default. Set e.g. "127.0.0.1:7777" to enable.
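
Putting those together, a minimal config.toml (values are illustrative, not defaults):

default_image_name = "debian-bookworm"
ssh_key_path = "~/.ssh/id_ed25519"
firecracker_bin = "/opt/firecracker/firecracker"
web_listen_addr = "127.0.0.1:7777"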

Full key list in internal/config/config.go.

vm_defaults — sizing for new VMs

Every vm run / vm create prints a spec: line up front showing the vCPU, RAM, and disk the VM will get. When the flags aren't set, those values come from:

  1. [vm_defaults] in config (if present, wins).
  2. Host-derived heuristics (roughly: host CPUs / 4 capped at 4 vCPUs, host RAM / 8 capped at 8 GiB, 8 GiB disk; a 16-core, 64 GiB host, for example, gets 4 vCPU / 8 GiB RAM / 8 GiB disk).
  3. Built-in constants (floor).

banger doctor prints the effective defaults with provenance.

[vm_defaults]
vcpu = 4
memory_mib = 4096
disk_size = "16G"

All keys optional — omit any you'd rather let banger decide.

file_sync — host → guest file copies

[[file_sync]]
host = "~/.aws"          # whole directory, recursive
guest = "~/.aws"

[[file_sync]]
host = "~/.config/gh/hosts.yml"
guest = "~/.config/gh/hosts.yml"

[[file_sync]]
host = "~/bin/my-script"
guest = "~/bin/my-script"
mode = "0755"            # optional; default 0600 for files

Runs at vm create time. Each entry copies host → guest onto the VM's work disk (mounted at /root in the guest). Guest paths must live under ~/ or /root/.... Default is no entries — add the ones you want.

Advanced

The common path is vm run. Power-user flows (vm create, OCI pull for arbitrary images, image register, long-lived sessions) are documented in docs/advanced.md.

Security

Guest VMs are single-user development sandboxes, not multi-tenant servers. Each guest's sshd is configured with:

PermitRootLogin prohibit-password
PubkeyAuthentication yes
PasswordAuthentication no
KbdInteractiveAuthentication no
AuthorizedKeysFile /root/.ssh/authorized_keys

The host SSH key is the only authentication mechanism. StrictModes is on (sshd's default); banger normalises /root, /root/.ssh, and authorized_keys perms at provisioning time so the default passes.

VMs are reachable only through the host bridge network (172.16.0.0/24 by default). Do not expose the bridge interface or guest IPs to an untrusted network.

The web UI is disabled by default. If you opt in via web_listen_addr, it binds 127.0.0.1 — do not publish it to a shared network.

Further reading