The CLI carried a full second copy of the workspace import
implementation that `vm run` never actually used:
- importVMRunRepoToGuest (no callers — the live flow calls the
daemon's PrepareVMWorkspace RPC instead)
- prepareVMRunRepoCopy, vmRunCheckoutCommit, vmRunCheckoutScript,
gitFileURL, runHostCommand (all reachable only from the dead
importVMRunRepoToGuest)
Plus a duplicated repo-inspection surface that shadowed the
daemon's:
- inspectVMRunRepo ran every git query the daemon re-ran during
workspace.prepare (HEAD, branch, identity, origin, overlay list)
- gitOutput / gitTrimmedOutput / gitResolvedConfigValue /
parseNullSeparatedOutput / listSubmodules / listOverlayPaths /
resolveVMRunSourcePath — all identical to the exported
workspace.* versions
- vmRunRepoSpec — same fields as workspace.RepoSpec
Replaced with a single minimal preflight:
func vmRunPreflightRepo(ctx, rawPath) (absPath, err error)
The preflight only checks what the user can fix locally before
banger creates a VM (path exists, sits in a non-bare git repo, no
submodules). The daemon's workspace.prepare RPC does the full
inspection — and returns RepoRoot + RepoName in the response, which
the CLI now threads into the tooling harness instead of computing
them a second time.
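A minimal sketch of what that preflight can look like, assuming the checks shell out to git (illustrative only; the real CLI helper may differ in detail):

package vmrun // sketch only; package and error wording are illustrative

import (
	"context"
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// vmRunPreflightRepo validates only what the user can fix locally:
// the path exists, sits inside a non-bare git repo, and has no submodules.
func vmRunPreflightRepo(ctx context.Context, rawPath string) (string, error) {
	absPath, err := filepath.Abs(rawPath)
	if err != nil {
		return "", err
	}
	if _, err := os.Stat(absPath); err != nil {
		return "", fmt.Errorf("workspace path: %w", err)
	}
	// Being inside a work tree implies a non-bare repository.
	out, err := exec.CommandContext(ctx, "git", "-C", absPath,
		"rev-parse", "--is-inside-work-tree").Output()
	if err != nil || strings.TrimSpace(string(out)) != "true" {
		return "", fmt.Errorf("%s is not inside a git work tree", absPath)
	}
	// Reject submodules up front, before the daemon creates a VM.
	subs, err := exec.CommandContext(ctx, "git", "-C", absPath,
		"submodule", "status").Output()
	if err == nil && strings.TrimSpace(string(subs)) != "" {
		return "", fmt.Errorf("%s uses git submodules, which vm run does not support", absPath)
	}
	return absPath, nil
}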
Signature changes:
runVMRun(ctx, ..., *vmRunRepo, ...) // was: *vmRunRepoSpec
startVMRunToolingHarness(ctx, client, repoRoot, repoName, progress)
// was: (ctx, client, spec, progress)
vmRunToolingHarnessScript(plan) // was: (spec, plan)
vmRunToolingHarnessLaunchScript(repoName) // was: (spec)
Tests: the CLI-side git-inspection tests are replaced by a single
TestVMRunPreflightRejectsSubmodules that exercises the preflight.
Everything else (tooling harness script, progress renderer, SSH args,
runVMRun flows) keeps working. The shallow-copy / checkout-script
tests are gone — that code now lives only in
internal/daemon/workspace and is tested there.
Also fixed a latent bug the refactor exposed: vm run's --from flag
defaults to "HEAD", which the daemon reads as "from without branch"
and rejects. CLI now scrubs fromRef when branchName is empty.
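The shape of that fix, roughly (scrubFromRef is a hypothetical helper name, not the actual code):

// --from defaults to "HEAD" via the flag definition, which the daemon rejects
// when no branch is requested, so treat it as unset in that case.
func scrubFromRef(branchName, fromRef string) string {
	if branchName == "" {
		return ""
	}
	return fromRef
}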
Live verified: `banger vm run --name X . -- cmd` boots, workspace
materialises at /root/repo with matching HEAD, exit code propagates.
banger
One-command development sandboxes on Firecracker microVMs.
Quick start
make install
banger vm run --name sandbox
That's it. banger vm run auto-pulls the default golden image (Debian
bookworm with systemd, sshd, Docker CE, git, jq, mise, and the usual
dev tools) and kernel, creates a VM, starts it, and drops you into
an interactive ssh session. The first run takes a couple of minutes (bundle
download); subsequent runs take seconds.
Requirements
- Linux with `/dev/kvm`
- `sudo`
- Firecracker on `PATH`, or `firecracker_bin` set in config
- host tools checked by `banger doctor`
Build + install
make install
Installs banger (CLI), bangerd (daemon, auto-starts on first
CLI call), and banger-vsock-agent (companion, under
$PREFIX/lib/banger/).
To remove the binaries (and stop the daemon):
make uninstall
User data stays in place — the target prints the paths so you can
rm -rf them if you want a full purge:
- `~/.config/banger/` — config, managed SSH keys
- `~/.local/state/banger/` — VM records, rootfs images, kernels, daemon DB/log
- `~/.cache/banger/` — OCI layer cache
Shell completion
banger ships completion scripts for bash, zsh, fish, and
powershell. Tab-completion covers subcommands, flags, and live
resource names (VM, image, kernel, session) looked up from the
daemon. With the daemon down, resource completion silently
returns nothing — no file-completion fallback.
# bash (system-wide)
banger completion bash | sudo tee /etc/bash_completion.d/banger
# zsh (user-local; ~/.zfunc must be on fpath)
banger completion zsh > ~/.zfunc/_banger
# fish
banger completion fish > ~/.config/fish/completions/banger.fish
banger completion --help shows the shell-specific loading
recipes.
vm run
One command, four common shapes:
banger vm run # bare sandbox — drops into ssh
banger vm run ./repo # workspace at /root/repo — drops into ssh
banger vm run ./repo -- make test # workspace + run command, exits with its status
banger vm run --rm -- script.sh # ephemeral: VM is deleted on exit
- Bare mode gives you a clean shell.
- Workspace mode (path given) copies the repo's tracked + untracked non-ignored files into `/root/repo` and kicks off a best-effort `mise` tooling bootstrap from the repo's `.mise.toml` / `.tool-versions`. Log: `/root/.cache/banger/vm-run-tooling-<repo>.log`.
- Command mode (`-- <cmd>`) runs the command in the guest; exit code propagates through `banger`.
Disconnecting from an interactive session leaves the VM running. Use
vm stop / vm delete to clean up — or pass --rm so the VM
auto-deletes once the session / command exits.
--branch and --from apply only to workspace mode. --rm skips
the delete when the initial ssh wait times out, so a wedged sshd
leaves the VM alive for banger vm logs inspection.
Hostnames: reaching <vm>.vm
banger's daemon runs a DNS server for the .vm zone. With host-side
DNS routing you can ssh root@sandbox.vm or curl http://sandbox.vm:3000 from anywhere on the host — no copy-pasting
guest IPs. On systemd-resolved hosts this is auto-wired; everywhere
else there's a short recipe. See
docs/dns-routing.md.
Image catalog
banger image pull <name> fetches a pre-built bundle from the
embedded catalog. vm run calls this for you on demand.
Today's catalog:
| Name | What it is |
|---|---|
| `debian-bookworm` | Debian 12 slim + sshd + docker + dev tools |
See docs/image-catalog.md for the bundle
format and how to publish a new entry.
Config
Config lives at ~/.config/banger/config.toml. All keys optional.
Most commonly set:
- `default_image_name` — image used when `--image` is omitted (default `debian-bookworm`, auto-pulled from the catalog if not local).
- `ssh_key_path` — host SSH key. If unset, banger creates `~/.config/banger/ssh/id_ed25519`.
- `firecracker_bin` — override the auto-resolved `PATH` lookup.
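For example (the key names are real; the paths and values shown are illustrative, not shipped defaults):

default_image_name = "debian-bookworm"
ssh_key_path = "~/.ssh/id_ed25519"             # reuse an existing host key
firecracker_bin = "/usr/local/bin/firecracker" # skip the PATH lookup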
Full key list in internal/config/config.go.
vm_defaults — sizing for new VMs
Every vm run / vm create prints a spec: line up front showing
the vCPU, RAM, and disk the VM will get. When the flags aren't set,
those values come from:
- `[vm_defaults]` in config (if present, wins).
- Host-derived heuristics (roughly: `cpus/4` capped at 4, `ram/8` capped at 8 GiB, 8 GiB disk; see the sketch below).
- Built-in constants (floor).
banger doctor prints the effective defaults with provenance.
[vm_defaults]
vcpu = 4
memory_mib = 4096
disk_size = "16G"
All keys optional — omit whichever you want banger to decide.
file_sync — host → guest file copies
[[file_sync]]
host = "~/.aws" # whole directory, recursive
guest = "~/.aws"
[[file_sync]]
host = "~/.config/gh/hosts.yml"
guest = "~/.config/gh/hosts.yml"
[[file_sync]]
host = "~/bin/my-script"
guest = "~/bin/my-script"
mode = "0755" # optional; default 0600 for files
Runs at vm create time. Each entry copies host → guest onto
the VM's work disk (mounted at /root in the guest). Guest paths
must live under ~/ or /root/.... Default is no entries — add the
ones you want.
Advanced
The common path is vm run. Power-user flows (vm create, OCI pull
for arbitrary images, image register, long-lived sessions) are
documented in docs/advanced.md.
Security
Guest VMs are single-user development sandboxes, not multi-tenant servers. Each guest's sshd is configured with:
PermitRootLogin prohibit-password
PubkeyAuthentication yes
PasswordAuthentication no
KbdInteractiveAuthentication no
AuthorizedKeysFile /root/.ssh/authorized_keys
The host SSH key is the only authentication mechanism. StrictModes
is on (sshd's default); banger normalises /root, /root/.ssh, and
authorized_keys perms at provisioning time so the default passes.
VMs are reachable only through the host bridge network
(172.16.0.0/24 by default). Do not expose the bridge interface or
guest IPs to an untrusted network.
Further reading
- `docs/dns-routing.md` — resolving `<vm>.vm` hostnames from the host.
- `docs/image-catalog.md` — bundle format and publishing.
- `docs/kernel-catalog.md` — kernel bundles.
- `docs/oci-import.md` — pulling arbitrary OCI images.
- `docs/advanced.md` — power-user flows.