# banger

One-command development sandboxes on Firecracker microVMs.
## Quick start

```
make build
sudo ./build/bin/banger system install --owner "$USER"
banger vm run --name sandbox
```

That's it. `banger vm run` auto-pulls the default golden image (Debian
bookworm with systemd, sshd, Docker CE, git, jq, mise, and the usual
dev tools) and kernel, creates a VM, starts it, and drops you into
an interactive ssh session. The first run takes a couple of minutes
(bundle download); subsequent `vm run`s take seconds.
## Supported host path

banger's supported host/runtime path is:

- Linux on `x86_64` / `amd64`
- `systemd` as the host init/service manager
- `bangerd.service` running as the installed owner user
- `bangerd-root.service` running as the privileged host helper

Other setups may work with manual adaptation, but they are not the
supported operating model for this repo.
## Requirements

- `x86_64` / `amd64` Linux — arm64 is not supported today. The
  companion binaries, the published kernel catalog, and the OCI
  import path all assume `linux/amd64`. `banger doctor` surfaces this
  as a failing check on other architectures.
- systemd on the host — this is the supported service-management
  path. banger's supported install/run model is the owner-user
  `bangerd.service` plus the privileged `bangerd-root.service`
  installed by `banger system install`.
- `/dev/kvm`
- `sudo` for the install/admin commands (`system install`,
  `system restart`, `system uninstall`)
- Firecracker on `PATH`, or `firecracker_bin` set in config
- host tools checked by `banger doctor`
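The checks above can be sketched in plain shell — a hypothetical
preflight in the spirit of `banger doctor`, not its implementation
(the `check` helper and labels are invented for illustration):

```shell
# Hypothetical preflight sketch; `banger doctor` is the real check.
check() { eval "$2" >/dev/null 2>&1 && echo "ok   $1" || echo "MISS $1"; }
check "x86_64 host"         '[ "$(uname -m)" = x86_64 ]'
check "systemd is init"     '[ -d /run/systemd/system ]'
check "/dev/kvm present"    '[ -e /dev/kvm ]'
check "firecracker on PATH" 'command -v firecracker'
```

Each line prints `ok` or `MISS`; `banger doctor` remains the
authoritative report.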
## Build + install

```
make build
sudo ./build/bin/banger system install --owner "$USER"
```

This installs two systemd units, copies the current `banger`,
`bangerd`, and `banger-vsock-agent` binaries into `/usr/local`,
writes install metadata under `/etc/banger`, and starts both
services:

- `bangerd.service` runs as the configured owner user and exposes
  the public CLI socket at `/run/banger/bangerd.sock`.
- `bangerd-root.service` runs as root and handles the narrow set of
  privileged host operations over the private helper socket at
  `/run/banger-root/bangerd-root.sock`.
After that, normal daily commands such as `banger vm run` and
`banger image pull` are unprivileged.

This systemd service flow is the supported path. If you're not on a
host that can run both services, you're outside the supported host
model even if some pieces happen to work.

The split matters:

- `bangerd.service` runs as the owner user, keeps its writable state
  in `/var/lib/banger`, `/var/cache/banger`, and `/run/banger`, and
  sees the owner home read-only.
- `bangerd-root.service` is the only process that keeps elevated
  host capabilities, and that capability set is limited to the
  host-kernel primitives banger actually uses (`CAP_CHOWN`,
  `CAP_SYS_ADMIN`, `CAP_NET_ADMIN`).
To inspect or refresh the services:

```
banger system status
sudo banger system restart
```

To remove the system services:

```
sudo banger system uninstall
```

Add `--purge` if you also want to remove system-owned VM/image/cache
state under `/var/lib/banger`, `/var/cache/banger`, `/run/banger`,
and `/run/banger-root`. User config stays in place under your home
directory:

- `~/.config/banger/` — config, optional `ssh_config`
- `~/.local/state/banger/ssh/` — user SSH key + known_hosts
## Shell completion

banger ships completion scripts for bash, zsh, fish, and powershell.
Tab-completion covers subcommands, flags, and live resource names
(VM, image, kernel) looked up from the installed services. With the
services down, resource completion silently returns nothing — no
file-completion fallback.

```
# bash (system-wide)
banger completion bash | sudo tee /etc/bash_completion.d/banger

# zsh (user-local; ~/.zfunc must be on fpath)
banger completion zsh > ~/.zfunc/_banger

# fish
banger completion fish > ~/.config/fish/completions/banger.fish
```

`banger completion --help` shows the shell-specific loading recipes.
## vm run

One command, four common shapes:

```
banger vm run                      # bare sandbox — drops into ssh
banger vm run ./repo               # workspace at /root/repo — drops into ssh
banger vm run ./repo -- make test  # workspace + run command, exits with its status
banger vm run --rm -- script.sh    # ephemeral: VM is deleted on exit
```

- Bare mode gives you a clean shell.
- Workspace mode (path given) copies the repo's git-tracked files
  into `/root/repo` and kicks off a best-effort `mise` tooling
  bootstrap from the repo's `.mise.toml` / `.tool-versions`. Log:
  `/root/.cache/banger/vm-run-tooling-<repo>.log`. Untracked files
  (including local `.env`, scratch notes, credentials that aren't
  gitignored) are skipped by default — pass `--include-untracked` to
  also ship them. Pass `--dry-run` to print the exact file list and
  exit without creating a VM.
- Command mode (`-- <cmd>`) runs the command in the guest; the exit
  code propagates through `banger`.

Disconnecting from an interactive session leaves the VM running. Use
`vm stop` / `vm delete` to clean up — or pass `--rm` so the VM
auto-deletes once the session / command exits.

`--branch`, `--from`, `--include-untracked`, and `--dry-run` apply
only to workspace mode. `--rm` skips the delete when the initial ssh
wait times out, so a wedged sshd leaves the VM alive for
`banger vm logs` inspection.
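The workspace file set can be previewed with plain git — a sketch of
the tracked/untracked split described above; `--dry-run` prints the
authoritative list:

```shell
# Tracked files ship by default; untracked ones only with
# --include-untracked. Demonstrated in a throwaway repo.
repo="$(mktemp -d)"; cd "$repo"
git init -q
echo code > main.go && git add main.go    # tracked
echo secret > .env                        # untracked
git ls-files                              # default set → main.go
git ls-files --others --exclude-standard  # extras → .env
```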
## Hostnames: reaching `<vm>.vm`

banger's owner daemon runs a DNS server for the `.vm` zone. With
host-side DNS routing you can `curl http://sandbox.vm:3000` from
anywhere on the host — no copy-pasting guest IPs. On
systemd-resolved hosts the owner daemon asks the root helper to
auto-wire this, and that is the supported path. Everywhere else
there's a best-effort manual recipe. See `docs/dns-routing.md`.
### Optional: `ssh <name>.vm` shortcut

`banger vm ssh <name>` works out of the box. If you'd also like
plain `ssh sandbox.vm` from any terminal (using banger's key +
known_hosts), opt in:

```
banger ssh-config --install    # adds `Include ~/.config/banger/ssh_config`
                               # to ~/.ssh/config in a marker-fenced block
banger ssh-config --uninstall  # reverses it
banger ssh-config              # show the include line to paste manually
```

banger never touches `~/.ssh/config` on its own — the daemon keeps
its own known_hosts under `/var/lib/banger/ssh/known_hosts`, while
`banger ssh-config` keeps the user-facing file fresh at
`~/.config/banger/ssh_config`; whether and how it's pulled into your
SSH config is up to you.
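If you'd rather wire the Include line yourself, a sketch in plain
shell — an idempotent prepend, not the marker-fenced block that
`--install` writes (`$cfg` is a stand-in so you can point it at a
scratch file first):

```shell
# Manual alternative to `banger ssh-config --install` (sketch only).
cfg="${cfg:-$HOME/.ssh/config}"
mkdir -p "$(dirname "$cfg")" && touch "$cfg"
line='Include ~/.config/banger/ssh_config'
grep -qxF "$line" "$cfg" || printf '%s\n%s' "$line" "$(cat "$cfg")" > "$cfg"
head -n 1 "$cfg"    # → Include ~/.config/banger/ssh_config
```

Re-running is a no-op thanks to the `grep -qxF` guard.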
## Image catalog

`banger image pull <name>` fetches a pre-built bundle from the
embedded catalog. `vm run` calls this for you on demand.

Today's catalog:

| Name | What it is |
|---|---|
| `debian-bookworm` | Debian 12 slim + sshd + docker + dev tools |

See `docs/image-catalog.md` for the bundle format and how to publish
a new entry.
## Config

Config lives at `~/.config/banger/config.toml`. All keys optional.
Most commonly set:

- `default_image_name` — image used when `--image` is omitted
  (default `debian-bookworm`, auto-pulled from the catalog if not
  local).
- `ssh_key_path` — host SSH key. If unset, banger creates
  `~/.local/state/banger/ssh/id_ed25519`. Accepts absolute paths or
  `~/`-anchored paths; `~/foo` expands against `$HOME`. Relative
  paths are rejected at config load.
- `firecracker_bin` — override the auto-resolved `PATH` lookup.

Full key list in `internal/config/config.go`.
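The `ssh_key_path` rule can be sketched as follows — a hypothetical
`expand_key_path` helper, not banger's actual config loader:

```shell
# Absolute or ~/-anchored paths only; bare relative paths rejected.
expand_key_path() {
  case "$1" in
    "~/"*) printf '%s\n' "${HOME}${1#\~}" ;;   # ~/foo → $HOME/foo
    /*)    printf '%s\n' "$1" ;;               # absolute: as-is
    *)     echo "rejected: $1" >&2; return 1 ;;
  esac
}
expand_key_path '~/keys/id_ed25519'     # → $HOME/keys/id_ed25519
expand_key_path /abs/id_ed25519         # → /abs/id_ed25519
expand_key_path rel/id_ed25519 || true  # rejected at load
```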
### vm_defaults — sizing for new VMs

Every `vm run` / `vm create` prints a `spec:` line up front showing
the vCPU, RAM, and disk the VM will get. When the flags aren't set,
those values come from:

1. `[vm_defaults]` in config (if present, wins).
2. Host-derived heuristics (roughly: `cpus/4` capped at 4, `ram/8`
   capped at 8 GiB, 8 GiB disk).
3. Built-in constants (floor).

`banger doctor` prints the effective defaults with provenance.

```
[vm_defaults]
vcpu = 4
memory_mib = 4096
disk_size = "16G"
```

All keys optional — omit whichever you want banger to decide.
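The host-derived heuristic can be sketched in shell arithmetic — an
approximation of "cpus/4 capped at 4, ram/8 capped at 8 GiB";
banger's exact rounding may differ:

```shell
# Example host: 16 cores, 64 GiB RAM.
cpus=16 ram_mib=65536
vcpu=$(( cpus / 4 ));   if [ "$vcpu" -gt 4 ];    then vcpu=4;   fi
mem=$(( ram_mib / 8 )); if [ "$mem" -gt 8192 ];  then mem=8192; fi
echo "spec: vcpu=$vcpu memory_mib=$mem disk=8G"
# → spec: vcpu=4 memory_mib=8192 disk=8G
```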
### file_sync — host → guest file copies

```
[[file_sync]]
host = "~/.aws"          # whole directory, recursive
guest = "~/.aws"

[[file_sync]]
host = "~/.config/gh/hosts.yml"
guest = "~/.config/gh/hosts.yml"

[[file_sync]]
host = "~/bin/my-script"
guest = "~/bin/my-script"
mode = "0755"            # optional; default 0600 for files
```

Runs at `vm create` time. Each entry copies host → guest onto the
VM's work disk (mounted at `/root` in the guest). Guest paths must
live under `~/` or `/root/...`. Host paths must live under the
installed owner's home directory; `~/...` is the intended form, and
absolute paths are accepted only when they still point inside that
home. Default is no entries — add the ones you want. A top-level
symlink is followed only when its resolved target stays inside the
owner home. Symlinks encountered while recursing into a synced
directory are skipped with a warning — they'd otherwise leak files
from outside the named tree (e.g. a symlink inside `~/.aws` pointing
to an unrelated credential dir).
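The top-level symlink rule can be sketched as a containment check —
a hypothetical `inside_home` helper, not banger's implementation:

```shell
# Follow a symlink only if its resolved target stays inside $HOME.
inside_home() {
  t="$(readlink -f "$1")" || return 1
  case "$t" in "$HOME"|"$HOME"/*) return 0 ;; *) return 1 ;; esac
}
mkdir -p "$HOME/demo/creds"
ln -sf /etc/hosts "$HOME/demo/escape"      # resolves outside the home
ln -sf "$HOME/demo/creds" "$HOME/demo/ok"  # resolves inside the home
if inside_home "$HOME/demo/escape"; then echo "escape: follow"; else echo "escape: skip"; fi
if inside_home "$HOME/demo/ok";     then echo "ok: follow";     else echo "ok: skip";     fi
```

The first link would be skipped, the second followed.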
## Advanced

The common path is `vm run`. Power-user flows (`vm create`, OCI pull
for arbitrary images, `image register`, manual workspace prepare)
are documented in `docs/advanced.md`.
## Security

Guest VMs are single-user development sandboxes, not multi-tenant
servers. Each guest's sshd is configured with:

```
PermitRootLogin prohibit-password
PubkeyAuthentication yes
PasswordAuthentication no
KbdInteractiveAuthentication no
AuthorizedKeysFile /root/.ssh/authorized_keys
```

The host SSH key is the only authentication mechanism. `StrictModes`
is on (sshd's default); banger normalises `/root`, `/root/.ssh`, and
`authorized_keys` perms at provisioning time so the default passes.
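That normalisation amounts to something like the following —
`$guest_root` is a stand-in for the guest's `/root`, and these are
not the exact commands banger runs:

```shell
# Perms sshd's StrictModes expects on the login path.
guest_root="$(mktemp -d)"
mkdir -p "$guest_root/.ssh"
: > "$guest_root/.ssh/authorized_keys"
chmod 700 "$guest_root" "$guest_root/.ssh"
chmod 600 "$guest_root/.ssh/authorized_keys"
stat -c '%a %n' "$guest_root/.ssh" "$guest_root/.ssh/authorized_keys"
# prints 700 for .ssh and 600 for authorized_keys
```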
VMs are reachable only through the host bridge network
(`172.16.0.0/24` by default). Do not expose the bridge interface or
guest IPs to an untrusted network.
## Further reading

- `docs/dns-routing.md` — resolving `<vm>.vm` hostnames from the host.
- `docs/image-catalog.md` — bundle format and publishing.
- `docs/kernel-catalog.md` — kernel bundles.
- `docs/oci-import.md` — pulling arbitrary OCI images.
- `docs/advanced.md` — power-user flows.