validateManagedPath was textual-only: filepath.Clean + dest-prefix
match. That stopped `..` escapes but not the symlink-bypass attack
that motivated this fix — a daemon-UID attacker can write into
StateDir/RuntimeDir (it's their UID), so they can plant
`<StateDir>/redirect -> /etc` and any helper RPC that then operates
on `<StateDir>/redirect/...` resolves through the symlink at the
kernel and lands at /etc/... on the host.
Concretely, the leaks this closed:
* priv.create_dm_snapshot: rootfs/cow paths fed to losetup —
losetup follows the symlink and attaches a host block device.
* priv.launch_firecracker: kernel/initrd paths hard-linked into
the chroot via `ln -f` — link(2) on Linux follows source
symlinks, hard-linking host files into the jail.
* priv.read_ext4_file / priv.write_ext4_files: image paths fed
to debugfs / e2cp as root.
* validateLaunchDrivePath: drive paths mknod'd or hard-linked.
* validateJailerOpts: chroot base.
Fix: after the existing prefix match, walk every component below
the matched root with Lstat. Any existing symlink — leaf or
intermediate — fails the validator. ENOENT is tolerated because
several callers pass paths that firecracker or the helper
materialises later (sockets, log files, kernel hard-link targets);
whoever materialises them goes through the same validation when the
helper-side primitive runs.
Subsumes most of validateNotSymlink's coverage but the explicit
call sites (methodEnsureSocketAccess, methodCleanupJailerChroot)
keep their belt-and-braces check — those paths must EXIST and
not be symlinks, which validateNotSymlink enforces strictly while
the broadened validateManagedPath tolerates ENOENT.
Race-free in practice: helper RPCs are short and the validator
fires on the same kernel state the next syscall sees. The helper
loop processes RPCs serially per-connection, and the validator
plus the syscall both run as root within microseconds of each
other.
Four new tests cover symlink leaf, symlink intermediate, missing
leaf (must pass), and the plain happy path. Smoke at JOBS=4 still
green — every legitimate daemon-supplied path passes the walk.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
# banger
One-command development sandboxes on Firecracker microVMs.
## Quick start

```
make build
sudo ./build/bin/banger system install --owner "$USER"
banger vm run --name sandbox
```
That's it. `banger vm run` auto-pulls the default golden image (Debian
bookworm with systemd, sshd, Docker CE, git, jq, mise, and the usual
dev tools) and kernel, creates a VM, starts it, and drops you into
an interactive ssh session. The first run takes a couple of minutes
(bundle download); subsequent `vm run`s take seconds.
## Supported host path

banger's supported host/runtime path is:

- Linux on x86_64 / amd64
- systemd as the host init/service manager
- `bangerd.service` running as the installed owner user
- `bangerd-root.service` running as the privileged host helper

Other setups may work with manual adaptation, but they are not the supported operating model for this repo.
## Requirements

- x86_64 / amd64 Linux — arm64 is not supported today. The companion binaries, the published kernel catalog, and the OCI import path all assume `linux/amd64`. `banger doctor` surfaces this as a failing check on other architectures.
- systemd on the host — this is the supported service-management path. banger's supported install/run model is the owner-user `bangerd.service` plus the privileged `bangerd-root.service` installed by `banger system install`.
- `/dev/kvm`
- `sudo` for the install/admin commands (`system install`, `system restart`, `system uninstall`)
- Firecracker on `PATH`, or `firecracker_bin` set in config
- host tools checked by `banger doctor`
## Build + install

```
make build
sudo ./build/bin/banger system install --owner "$USER"
```
This installs two systemd units, copies the current `banger`,
`bangerd`, and `banger-vsock-agent` binaries into `/usr/local`, writes
install metadata under `/etc/banger`, and starts both services:

- `bangerd.service` runs as the configured owner user and exposes the public CLI socket at `/run/banger/bangerd.sock`.
- `bangerd-root.service` runs as root and handles the narrow set of privileged host operations over the private helper socket at `/run/banger-root/bangerd-root.sock`.
After that, normal daily commands such as `banger vm run` and
`banger image pull` are unprivileged.
This systemd service flow is the supported path. If you're not on a
host that can run both services, you're outside the supported host
model even if some pieces happen to work.
The split matters:

- `bangerd.service` runs as the owner user, keeps its writable state in `/var/lib/banger`, `/var/cache/banger`, and `/run/banger`, and sees the owner home read-only.
- `bangerd-root.service` is the only process that keeps elevated host capabilities, and that capability set is limited to the host-kernel primitives banger actually uses (`CAP_CHOWN`, `CAP_SYS_ADMIN`, `CAP_NET_ADMIN`).
To inspect or refresh the services:

```
banger system status
sudo banger system restart
```
To remove the system services:

```
sudo banger system uninstall
```
Add `--purge` if you also want to remove system-owned VM/image/cache
state under `/var/lib/banger`, `/var/cache/banger`, `/run/banger`, and
`/run/banger-root`. User config stays in place under your home
directory:

- `~/.config/banger/` — config, optional `ssh_config`
- `~/.local/state/banger/ssh/` — user SSH key + known_hosts
## Shell completion

banger ships completion scripts for bash, zsh, fish, and
powershell. Tab-completion covers subcommands, flags, and live
resource names (VM, image, kernel) looked up from the installed
services. With the services down, resource completion silently
returns nothing — no file-completion fallback.
```
# bash (system-wide)
banger completion bash | sudo tee /etc/bash_completion.d/banger

# zsh (user-local; ~/.zfunc must be on fpath)
banger completion zsh > ~/.zfunc/_banger

# fish
banger completion fish > ~/.config/fish/completions/banger.fish
```
`banger completion --help` shows the shell-specific loading recipes.
## vm run

One command, four common shapes:

```
banger vm run                    # bare sandbox — drops into ssh
banger vm run ./repo             # workspace at /root/repo — drops into ssh
banger vm run ./repo -- make test  # workspace + run command, exits with its status
banger vm run --rm -- script.sh  # ephemeral: VM is deleted on exit
```
- Bare mode gives you a clean shell.
- Workspace mode (path given) copies the repo's git-tracked files into `/root/repo` and kicks off a best-effort `mise` tooling bootstrap from the repo's `.mise.toml` / `.tool-versions`. Log: `/root/.cache/banger/vm-run-tooling-<repo>.log`. Untracked files (including local `.env`, scratch notes, credentials that aren't gitignored) are skipped by default — pass `--include-untracked` to also ship them. Pass `--dry-run` to print the exact file list and exit without creating a VM.
- Command mode (`-- <cmd>`) runs the command in the guest; the exit code propagates through `banger`.
Disconnecting from an interactive session leaves the VM running. Use
`vm stop` / `vm delete` to clean up — or pass `--rm` so the VM
auto-deletes once the session / command exits.

`--branch`, `--from`, `--include-untracked`, and `--dry-run` apply
only to workspace mode. `--rm` skips the delete when the initial ssh
wait times out, so a wedged sshd leaves the VM alive for
`banger vm logs` inspection.
## Hostnames: reaching `<vm>.vm`

banger's owner daemon runs a DNS server for the `.vm` zone. With
host-side DNS routing you can `curl http://sandbox.vm:3000` from
anywhere on the host — no copy-pasting guest IPs. On
systemd-resolved hosts the owner daemon asks the root helper to
auto-wire this, and that is the supported path. Everywhere else
there's a best-effort manual recipe. See `docs/dns-routing.md`.
## Optional: `ssh <name>.vm` shortcut

`banger vm ssh <name>` works out of the box. If you'd also like plain
`ssh sandbox.vm` from any terminal (using banger's key + known_hosts),
opt in:

```
banger ssh-config --install   # adds `Include ~/.config/banger/ssh_config`
                              # to ~/.ssh/config in a marker-fenced block
banger ssh-config --uninstall # reverse it
banger ssh-config             # show the include line to paste manually
```
banger never touches `~/.ssh/config` on its own — the daemon keeps its
own known_hosts under `/var/lib/banger/ssh/known_hosts`, while
`banger ssh-config` keeps the user-facing file fresh at
`~/.config/banger/ssh_config`; whether and how it's
pulled into your SSH config is up to you.
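For reference, the installed block in `~/.ssh/config` looks roughly like this (the marker comments shown here are illustrative; banger writes its own fence text):

```
# >>> banger ssh-config >>>
Include ~/.config/banger/ssh_config
# <<< banger ssh-config <<<
```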
## Image catalog

`banger image pull <name>` fetches a pre-built bundle from the
embedded catalog. `vm run` calls this for you on demand.
Today's catalog:

| Name | What it is |
|---|---|
| `debian-bookworm` | Debian 12 slim + sshd + docker + dev tools |

See `docs/image-catalog.md` for the bundle format and how to publish
a new entry.
## Config

Config lives at `~/.config/banger/config.toml`. All keys optional.
Most commonly set:

- `default_image_name` — image used when `--image` is omitted (default `debian-bookworm`, auto-pulled from the catalog if not local).
- `ssh_key_path` — host SSH key. If unset, banger creates `~/.local/state/banger/ssh/id_ed25519`. Accepts absolute paths or `~/`-anchored paths; `~/foo` expands against `$HOME`. Relative paths are rejected at config load.
- `firecracker_bin` — override the auto-resolved `PATH` lookup.

Full key reference: `docs/config.md`.
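A minimal `~/.config/banger/config.toml` exercising the keys above (values are illustrative):

```toml
# ~/.config/banger/config.toml  (all keys optional)
default_image_name = "debian-bookworm"
ssh_key_path = "~/.ssh/id_ed25519"             # absolute or ~/-anchored; relative rejected
firecracker_bin = "/usr/local/bin/firecracker" # overrides the PATH lookup
```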
## `vm_defaults` — sizing for new VMs

Every `vm run` / `vm create` prints a `spec:` line up front showing
the vCPU, RAM, and disk the VM will get. When the flags aren't set,
those values come from:

1. `[vm_defaults]` in config (if present, wins).
2. Host-derived heuristics (roughly: `cpus/4` capped at 4, `ram/8` capped at 8 GiB, 8 GiB disk).
3. Built-in constants (floor).

`banger doctor` prints the effective defaults with provenance.
```
[vm_defaults]
vcpu = 4
memory_mib = 4096
disk_size = "16G"
```

All keys optional — omit whichever you want banger to decide.
## `file_sync` — host → guest file copies

```
[[file_sync]]
host = "~/.aws" # whole directory, recursive
guest = "~/.aws"

[[file_sync]]
host = "~/.config/gh/hosts.yml"
guest = "~/.config/gh/hosts.yml"

[[file_sync]]
host = "~/bin/my-script"
guest = "~/bin/my-script"
mode = "0755" # optional; default 0600 for files
```
Runs at `vm create` time. Each entry copies host → guest onto
the VM's work disk (mounted at `/root` in the guest). Guest paths
must live under `~/` or `/root/...`. Host paths must live under the
installed owner's home directory; `~/...` is the intended form, and
absolute paths are accepted only when they still point inside that
home. Default is no entries — add the ones you want. A top-level
symlink is followed only when its resolved target stays inside the
owner home. Symlinks encountered while recursing into a synced
directory are skipped with a warning — they'd otherwise leak files
from outside the named tree (e.g. a symlink inside `~/.aws` pointing
to an unrelated credential dir).
## Advanced

The common path is `vm run`. Power-user flows (`vm create`, OCI pull
for arbitrary images, `image register`, manual workspace prepare) are
documented in `docs/advanced.md`.
## Security

Guest VMs are single-user development sandboxes, not multi-tenant
servers. Each guest's sshd is configured with:

```
PermitRootLogin prohibit-password
PubkeyAuthentication yes
PasswordAuthentication no
KbdInteractiveAuthentication no
AuthorizedKeysFile /root/.ssh/authorized_keys
```
The host SSH key is the only authentication mechanism. `StrictModes`
is on (sshd's default); banger normalises `/root`, `/root/.ssh`, and
`authorized_keys` perms at provisioning time so the default passes.
VMs are reachable only through the host bridge network
(172.16.0.0/24 by default). Do not expose the bridge interface or
guest IPs to an untrusted network.
## Further reading

- `docs/config.md` — full config key reference.
- `docs/dns-routing.md` — resolving `<vm>.vm` hostnames from the host.
- `docs/image-catalog.md` — bundle format and publishing.
- `docs/kernel-catalog.md` — kernel bundles.
- `docs/oci-import.md` — pulling arbitrary OCI images.
- `docs/advanced.md` — power-user flows.