One-command development sandboxes on Firecracker microVMs. https://git.thaloco.com/thaloco/banger/

banger

One-command development sandboxes on Firecracker microVMs.

Quick start

make build
sudo ./build/bin/banger system install --owner "$USER"
banger vm run --name sandbox

That's it. banger vm run auto-pulls the default golden image (Debian bookworm with systemd, sshd, Docker CE, git, jq, mise, and the usual dev tools) and kernel, creates a VM, starts it, and drops you into an interactive ssh session. First run takes a couple of minutes (bundle download); subsequent vm runs take seconds.

Supported host path

banger's supported host/runtime path is:

  • Linux on x86_64 / amd64
  • systemd as the host init/service manager
  • bangerd.service running as the installed owner user
  • bangerd-root.service running as the privileged host helper

Other setups may work with manual adaptation, but they are not the supported operating model for this repo.

Requirements

  • x86_64 / amd64 Linux — arm64 is not supported today. The companion binaries, the published kernel catalog, and the OCI import path all assume linux/amd64. banger doctor surfaces this as a failing check on other architectures.
  • systemd on the host — this is the supported service-management path. banger's supported install/run model is the owner-user bangerd.service plus the privileged bangerd-root.service installed by banger system install.
  • /dev/kvm
  • sudo for the install/admin commands (system install, system restart, system uninstall)
  • Firecracker on PATH, or firecracker_bin set in config
  • host tools checked by banger doctor
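
banger doctor is the authoritative check, but the core requirements above can be eyeballed with a few shell one-liners. This is an illustrative sketch, not banger's actual doctor logic:

```shell
# Preflight sketch: architecture, firecracker on PATH, /dev/kvm access.
arch=$(uname -m)
echo "arch: $arch"              # supported path is x86_64
if command -v firecracker >/dev/null 2>&1; then
  fc=ok
else
  fc=missing
fi
echo "firecracker: $fc"
if [ -r /dev/kvm ] && [ -w /dev/kvm ]; then
  kvm=ok
else
  kvm=missing
fi
echo "/dev/kvm: $kvm"
```

`missing` on either of the last two lines means vm run will not work until the gap is fixed; banger doctor reports the same conditions with more context.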

Build + install

make build
sudo ./build/bin/banger system install --owner "$USER"

This installs two systemd units, copies the current banger, bangerd, and banger-vsock-agent binaries into /usr/local, writes install metadata under /etc/banger, and starts both services:

  • bangerd.service runs as the configured owner user and exposes the public CLI socket at /run/banger/bangerd.sock.
  • bangerd-root.service runs as root and handles the narrow set of privileged host operations over the private helper socket at /run/banger-root/bangerd-root.sock.

After that, normal daily commands such as banger vm run and banger image pull are unprivileged.

This systemd service flow is the supported path. If you're not on a host that can run both services, you're outside the supported host model even if some pieces happen to work.

The split matters:

  • bangerd.service runs as the owner user, keeps its writable state in /var/lib/banger, /var/cache/banger, and /run/banger, and sees the owner home read-only.
  • bangerd-root.service is the only process that keeps elevated host capabilities, and that capability set is limited to the host-kernel primitives banger actually uses (CAP_CHOWN, CAP_SYS_ADMIN, CAP_NET_ADMIN).

To inspect or refresh the services:

banger system status
sudo banger system restart

To remove the system services:

sudo banger system uninstall

Add --purge if you also want to remove system-owned VM/image/cache state under /var/lib/banger, /var/cache/banger, /run/banger, and /run/banger-root. User config stays in place under your home directory:

  • ~/.config/banger/ — config, optional ssh_config
  • ~/.local/state/banger/ssh/ — user SSH key + known_hosts

Shell completion

banger ships completion scripts for bash, zsh, fish, and powershell. Tab-completion covers subcommands, flags, and live resource names (VM, image, kernel) looked up from the installed services. With the services down, resource completion silently returns nothing — no file-completion fallback.

# bash (system-wide)
banger completion bash | sudo tee /etc/bash_completion.d/banger

# zsh (user-local; ~/.zfunc must be on fpath)
banger completion zsh > ~/.zfunc/_banger

# fish
banger completion fish > ~/.config/fish/completions/banger.fish

banger completion --help shows the shell-specific loading recipes.
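
The zsh recipe above assumes ~/.zfunc is already on fpath. If it isn't, a minimal setup looks like this (a sketch for a plain zsh without a completion framework; it appends unconditionally, so check your ~/.zshrc first):

```shell
# Create the user function dir and wire it into zsh's completion search path.
mkdir -p "$HOME/.zfunc"
printf '%s\n' \
  'fpath=(~/.zfunc $fpath)' \
  'autoload -Uz compinit && compinit' >> "$HOME/.zshrc"
tail -n 2 "$HOME/.zshrc"
```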

vm run

One command, four common shapes:

banger vm run                          # bare sandbox — drops into ssh
banger vm run ./repo                   # workspace at /root/repo — drops into ssh
banger vm run ./repo -- make test      # workspace + run command, exits with its status
banger vm run --rm -- script.sh        # ephemeral: VM is deleted on exit
  • Bare mode gives you a clean shell.
  • Workspace mode (path given) copies the repo's git-tracked files into /root/repo and kicks off a best-effort mise tooling bootstrap from the repo's .mise.toml / .tool-versions. Log: /root/.cache/banger/vm-run-tooling-<repo>.log. Untracked files (including local .env, scratch notes, credentials that aren't gitignored) are skipped by default — pass --include-untracked to also ship them. Pass --dry-run to print the exact file list and exit without creating a VM.
  • Command mode (-- <cmd>) runs the command in the guest; exit code propagates through banger.

Disconnecting from an interactive session leaves the VM running. Use vm stop / vm delete to clean up — or pass --rm so the VM auto-deletes once the session / command exits.

--branch, --from, --include-untracked, and --dry-run apply only to workspace mode. --rm skips the delete when the initial ssh wait times out, so a wedged sshd leaves the VM alive for banger vm logs inspection.

Hostnames: reaching <vm>.vm

banger's owner daemon runs a DNS server for the .vm zone. With host-side DNS routing you can curl http://sandbox.vm:3000 from anywhere on the host — no copy-pasting guest IPs. On systemd-resolved hosts the owner daemon asks the root helper to auto-wire this, and that is the supported path. Everywhere else there's a best-effort manual recipe. See docs/dns-routing.md.

Optional: ssh <name>.vm shortcut

banger vm ssh <name> works out of the box. If you'd also like plain ssh sandbox.vm from any terminal (using banger's key + known_hosts), opt in:

banger ssh-config --install    # adds `Include ~/.config/banger/ssh_config`
                               # to ~/.ssh/config in a marker-fenced block
banger ssh-config --uninstall  # reverse it
banger ssh-config              # show the include line to paste manually

banger never touches ~/.ssh/config on its own — the daemon keeps its own known_hosts under /var/lib/banger/ssh/known_hosts, while banger ssh-config keeps the user-facing file fresh at ~/.config/banger/ssh_config; whether and how it's pulled into your SSH config is up to you.

Image catalog

banger image pull <name> fetches a pre-built bundle from the embedded catalog. vm run calls this for you on demand.

Today's catalog:

Name             What it is
debian-bookworm  Debian 12 slim + sshd + docker + dev tools

See docs/image-catalog.md for the bundle format and how to publish a new entry.

Config

Config lives at ~/.config/banger/config.toml. All keys optional.

Most commonly set:

  • default_image_name — image used when --image is omitted (default debian-bookworm, auto-pulled from the catalog if not local).
  • ssh_key_path — host SSH key. If unset, banger creates ~/.local/state/banger/ssh/id_ed25519. Accepts absolute paths or ~/-anchored paths; ~/foo expands against $HOME. Relative paths are rejected at config load.
  • firecracker_bin — override the auto-resolved PATH lookup.

Full key list in internal/config/config.go.
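
A minimal config using just the keys named above might look like this (values illustrative, not defaults banger ships):

```toml
# ~/.config/banger/config.toml — all keys optional
default_image_name = "debian-bookworm"
ssh_key_path = "~/.ssh/id_ed25519"
firecracker_bin = "/usr/local/bin/firecracker"
```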

vm_defaults — sizing for new VMs

Every vm run / vm create prints a spec: line up front showing the vCPU, RAM, and disk the VM will get. When the flags aren't set, those values come from:

  1. [vm_defaults] in config (if present, wins).
  2. Host-derived heuristics (roughly: cpus/4 capped at 4, ram/8 capped at 8 GiB, 8 GiB disk).
  3. Built-in constants (floor).

banger doctor prints the effective defaults with provenance.

[vm_defaults]
vcpu = 4
memory_mib = 4096
disk_size = "16G"

All keys optional — omit whichever you want banger to decide.
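
The tier-2 heuristics can be sketched in shell arithmetic. This mirrors the "roughly" description above (cpus/4 capped at 4, ram/8 capped at 8 GiB, 8 GiB disk) with hard-coded host values for illustration; the real logic lives in the daemon:

```shell
# Pretend-host: 16 CPUs, 64 GiB RAM.
cpus=16
host_ram_mib=65536

vcpu=$(( cpus / 4 ))
if [ "$vcpu" -gt 4 ]; then vcpu=4; fi

mem_mib=$(( host_ram_mib / 8 ))
if [ "$mem_mib" -gt 8192 ]; then mem_mib=8192; fi

echo "spec: vcpu=$vcpu memory_mib=$mem_mib disk=8G"
```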

file_sync — host → guest file copies

[[file_sync]]
host = "~/.aws"          # whole directory, recursive
guest = "~/.aws"

[[file_sync]]
host = "~/.config/gh/hosts.yml"
guest = "~/.config/gh/hosts.yml"

[[file_sync]]
host = "~/bin/my-script"
guest = "~/bin/my-script"
mode = "0755"            # optional; default 0600 for files

Runs at vm create time. Each entry copies host → guest onto the VM's work disk (mounted at /root in the guest). Default is no entries — add the ones you want.

  • Guest paths must live under ~/ or /root/....
  • Host paths must live under the installed owner's home directory; ~/... is the intended form, and absolute paths are accepted only when they still point inside that home.
  • A top-level symlink is followed only when its resolved target stays inside the owner home.
  • Symlinks encountered while recursing into a synced directory are skipped with a warning — they'd otherwise leak files from outside the named tree (e.g. a symlink inside ~/.aws pointing to an unrelated credential dir).
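
The top-level symlink containment rule amounts to "resolve, then check the prefix". A sketch of that check, using GNU realpath -m and a hard-coded owner home for illustration (banger's real check is in the daemon, not a shell script):

```shell
home="/home/dev"

# allow if the fully resolved path still sits under $home, deny otherwise.
contains() {
  case "$(realpath -m "$1")" in
    "$home"/*) echo allow ;;
    *)         echo deny ;;
  esac
}

r1=$(contains "$home/.aws")            # stays inside the owner home
r2=$(contains "$home/../root/secret")  # .. escapes the owner home
echo "$r1 $r2"
```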

Advanced

The common path is vm run. Power-user flows (vm create, OCI pull for arbitrary images, image register, manual workspace prepare) are documented in docs/advanced.md.

Security

Guest VMs are single-user development sandboxes, not multi-tenant servers. Each guest's sshd is configured with:

PermitRootLogin prohibit-password
PubkeyAuthentication yes
PasswordAuthentication no
KbdInteractiveAuthentication no
AuthorizedKeysFile /root/.ssh/authorized_keys

The host SSH key is the only authentication mechanism. StrictModes is on (sshd's default); banger normalises /root, /root/.ssh, and authorized_keys perms at provisioning time so the default passes.
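
The StrictModes-friendly normalisation boils down to tightening three paths. A sketch, exercised on a scratch directory (the real target is /root inside the guest, and the exact mode choices here are an assumption):

```shell
# Scratch stand-in for the guest's /root.
root=$(mktemp -d)
mkdir -p "$root/.ssh"
: > "$root/.ssh/authorized_keys"

# Owner-only dirs, owner-only key file — what sshd's StrictModes expects.
chmod 700 "$root" "$root/.ssh"
chmod 600 "$root/.ssh/authorized_keys"

stat -c '%a' "$root" "$root/.ssh" "$root/.ssh/authorized_keys"
```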

VMs are reachable only through the host bridge network (172.16.0.0/24 by default). Do not expose the bridge interface or guest IPs to an untrusted network.

Further reading