Reorganize the source checkout layout

Separate tracked source from generated artifacts so the repo root stops accumulating helper scripts, manifests, and local runtime outputs.

Move manual shell entrypoints under `scripts/`, manifests under `config/`, and the Firecracker API reference under `docs/reference/`. `make build` and `make runtime-bundle` now target `build/bin`, `build/runtime`, and `build/dist` as the canonical source-checkout paths.

Update runtime discovery, helper scripts, tests, and docs to follow the new layout while keeping legacy source-checkout runtime fallbacks for existing local bundles during migration.

Validated with `bash -n` on the moved scripts, `make build`, and `GOCACHE=/tmp/banger-gocache go test ./...`.
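The `bash -n` step above parses a script without executing it, which makes it a cheap pre-flight check after moving scripts around. A minimal stand-alone sketch (using a throwaway placeholder script; in the repo the loop would target the moved `scripts/*.sh` files instead):

```shell
# Create a temporary placeholder script and syntax-check it without running it.
tmp=$(mktemp -d)
cat > "$tmp/example.sh" <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
echo "placeholder"
EOF
for f in "$tmp"/*.sh; do
  # bash -n exits non-zero on a parse error and runs nothing on success
  bash -n "$f" && echo "syntax ok: $f"
done
rm -rf "$tmp"
```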
Thales Maciel 2026-03-21 17:22:57 -03:00
parent 2362d0ae39
commit 01c7cb5e65
No known key found for this signature in database
GPG key ID: 33112E6833C34679
23 changed files with 296 additions and 186 deletions

README.md (102 changes)

@@ -17,7 +17,7 @@ assuming one workstation layout.
## Runtime Bundle
Runtime artifacts are no longer tracked directly in Git. Source checkouts use a
-generated `./runtime/` bundle, while installed binaries use
+generated `./build/runtime/` bundle, while installed binaries use
`$(prefix)/lib/banger`.
The bundle contains:
@@ -34,30 +34,30 @@ The bundle contains:
- the helper scripts used by manual customization and installs
Bootstrap a source checkout from a local or published runtime archive. The
-checked-in [`runtime-bundle.toml`](/home/thales/projects/personal/banger/runtime-bundle.toml)
+checked-in [`config/runtime-bundle.toml`](/home/thales/projects/personal/banger/config/runtime-bundle.toml)
is a template and intentionally ships with empty `url` and `sha256`.
If you need to create a local archive first, do that from a checkout or machine
-that already has a populated `./runtime/` tree:
+that already has a populated `./build/runtime/` tree:
```bash
make runtime-package
-cp dist/banger-runtime.tar.gz /path/to/fresh-checkout/dist/
+cp build/dist/banger-runtime.tar.gz /path/to/fresh-checkout/build/dist/
```
In the fresh checkout:
```bash
-cp runtime-bundle.toml runtime-bundle.local.toml
+cp config/runtime-bundle.toml config/runtime-bundle.local.toml
```
-Edit `runtime-bundle.local.toml` to point at the staged archive and checksum:
+Edit `config/runtime-bundle.local.toml` to point at the staged archive and checksum:
```toml
-url = "./dist/banger-runtime.tar.gz"
+url = "./build/dist/banger-runtime.tar.gz"
sha256 = "<sha256 printed by make runtime-package>"
```
-Then bootstrap `./runtime/` with the local manifest copy:
+Then bootstrap `./build/runtime/` with the local manifest copy:
```bash
-make runtime-bundle RUNTIME_MANIFEST=runtime-bundle.local.toml
+make runtime-bundle RUNTIME_MANIFEST=config/runtime-bundle.local.toml
```
`url` may be a relative path, absolute path, `file:///...` URL, or HTTP(S)
@@ -68,8 +68,19 @@ URL. `make install` will not fetch artifacts for you.
make build
```
-Run `make build` after `./runtime/` has been bootstrapped. It also rebuilds the
-bundled `banger-vsock-agent` guest helper in `./runtime/`.
+Run `make build` after `./build/runtime/` has been bootstrapped. It writes
+`./build/bin/banger`, `./build/bin/bangerd`, and refreshes the bundled
+`banger-vsock-agent` guest helper in `./build/runtime/`.
+Older ignored root artifacts such as `./runtime/`, `./banger`, and `./bangerd`
+are no longer the canonical source-checkout layout. Leave them alone if you
+still need them, or remove them manually after migrating to `build/`.
+If you have confirmed your current images and runtime settings no longer point
+at the old checkout-local paths, a one-time cleanup looks like:
+```bash
+rm -rf ./runtime ./banger ./bangerd
+```
Install into `~/.local/bin` by default, with the runtime bundle under
`~/.local/lib/banger`:
@@ -178,8 +189,9 @@ State lives under XDG directories:
- runtime socket: `$XDG_RUNTIME_DIR/banger/bangerd.sock`
Installed binaries resolve their runtime bundle from `../lib/banger` relative to
-the executable. Source-checkout binaries resolve it from `./runtime` next to the
-repo-built `./banger`. You can override either with `runtime_dir` in
+the executable. Source-checkout binaries resolve it from `./build/runtime` next
+to `./build/bin/banger`, and still fall back to a legacy `./runtime` checkout
+bundle if that exists. You can override either with `runtime_dir` in
`~/.config/banger/config.toml` or `BANGER_RUNTIME_DIR`.
Useful config keys:
@@ -323,32 +335,32 @@ shell helpers treated as manual workflows rather than architecture drivers.
- Stopping a VM preserves its overlay and work disk.
## Rebuilding The Repo Default Rootfs
-`packages.apt` controls the base apt packages baked into rebuilt images,
+`config/packages.apt` controls the base apt packages baked into rebuilt images,
including guest tools such as `ss` used by `banger vm ports`.
-To rebuild the source-checkout default image in `./runtime/rootfs-docker.ext4`:
+To rebuild the source-checkout default image in `./build/runtime/rootfs-docker.ext4`:
```bash
make rootfs
```
-That rebuild also regenerates `./runtime/rootfs-docker.work-seed.ext4`, which
+That rebuild also regenerates `./build/runtime/rootfs-docker.work-seed.ext4`, which
the daemon uses to speed up future `vm create` calls, and bakes in the default
host-reachable `opencode` server service.
-If your runtime bundle does not include `./runtime/rootfs.ext4`, pass an
+If your runtime bundle does not include `./build/runtime/rootfs.ext4`, pass an
explicit base image instead:
```bash
-./make-rootfs.sh --base-rootfs /path/to/base-rootfs.ext4
+./scripts/make-rootfs.sh --base-rootfs /path/to/base-rootfs.ext4
```
If the package manifest changed and you want a fresh source-checkout image:
```bash
-rm -f ./runtime/rootfs-docker.ext4 ./runtime/rootfs-docker.ext4.packages.sha256
+rm -f ./build/runtime/rootfs-docker.ext4 ./build/runtime/rootfs-docker.ext4.packages.sha256
make rootfs
```
-`make rootfs` expects a bootstrapped runtime bundle. If `./runtime/rootfs.ext4`
-is not available, pass an explicit `--base-rootfs` to `./make-rootfs.sh`.
+`make rootfs` expects a bootstrapped runtime bundle. If `./build/runtime/rootfs.ext4`
+is not available, pass an explicit `--base-rootfs` to `./scripts/make-rootfs.sh`.
Existing VMs keep using their current image and disks; rebuilds only affect VMs
created from the rebuilt image afterward. Restarting an existing VM is not
enough to pick up guest provisioning changes such as the default `opencode`
@@ -363,13 +375,13 @@ make rootfs-void
```
That writes:
-- `./runtime/void-kernel/` when `make void-kernel` is used
-- `./runtime/rootfs-void.ext4`
-- `./runtime/rootfs-void.work-seed.ext4`
+- `./build/runtime/void-kernel/` when `make void-kernel` is used
+- `./build/runtime/rootfs-void.ext4`
+- `./build/runtime/rootfs-void.work-seed.ext4`
This path is intentionally local-only and does not change the default Debian
image flow. `make void-kernel` stages an actual Void `linux6.12` kernel package
-under `./runtime/void-kernel/`, including the raw `vmlinuz`, extracted
+under `./build/runtime/void-kernel/`, including the raw `vmlinuz`, extracted
Firecracker `vmlinux`, a matching `initramfs`, the matching config, and the
matching modules tree. The initramfs is generated locally with `dracut`
against the downloaded Void sysroot so the kernel, initrd, and modules stay
@@ -395,11 +407,11 @@ The builder fetches official static XBPS tools and packages from the Void
mirror during the build. The kernel fetcher and rootfs builder currently
support only `x86_64`.
-The package set comes from [`packages.void`](/home/thales/projects/personal/banger/packages.void).
+The package set comes from [`config/packages.void`](/home/thales/projects/personal/banger/config/packages.void).
You can override the mirror, size, output path, or kernel package directly:
```bash
-./make-void-kernel.sh --kernel-package linux6.12
-./make-rootfs-void.sh --mirror https://repo-default.voidlinux.org --size 2G
+./scripts/make-void-kernel.sh --kernel-package linux6.12
+./scripts/make-rootfs-void.sh --mirror https://repo-default.voidlinux.org --size 2G
```
The fastest local iteration loop does not require changing your default image
@@ -408,8 +420,8 @@ config at all:
make void-kernel
make rootfs-void
make void-register
-./banger vm create --image void-exp --name void-dev
-./banger vm ssh void-dev
+./build/bin/banger vm create --image void-exp --name void-dev
+./build/bin/banger vm ssh void-dev
```
Rebuild the staged Void kernel or Void rootfs, then recreate existing
@@ -425,7 +437,7 @@ make verify-void
`make void-register` uses the unmanaged image registration path to create or
update a `void-exp` image record in place, so repeated rebuilds do not require
editing `~/.config/banger/config.toml`. It expects a complete staged Void
-kernel set under `./runtime/void-kernel/` and points the experimental image at
+kernel set under `./build/runtime/void-kernel/` and points the experimental image at
the staged Void `vmlinux`, `initramfs`, and matching modules tree.
There is also a one-step helper target:
@@ -453,12 +465,12 @@ and package manifest:
```bash
banger image register \
--name void-exp \
---rootfs ./runtime/rootfs-void.ext4 \
---work-seed ./runtime/rootfs-void.work-seed.ext4 \
---kernel ./runtime/void-kernel/boot/vmlinux-6.12.77_1 \
---initrd ./runtime/void-kernel/boot/initramfs-6.12.77_1.img \
---modules ./runtime/void-kernel/lib/modules/6.12.77_1 \
---packages ./packages.void
+--rootfs ./build/runtime/rootfs-void.ext4 \
+--work-seed ./build/runtime/rootfs-void.work-seed.ext4 \
+--kernel ./build/runtime/void-kernel/boot/vmlinux-6.12.77_1 \
+--initrd ./build/runtime/void-kernel/boot/initramfs-6.12.77_1.img \
+--modules ./build/runtime/void-kernel/lib/modules/6.12.77_1 \
+--packages ./config/packages.void
```
If an unmanaged image with the same name already exists, `image register`
@@ -466,17 +478,17 @@ updates it in place so future `vm create --image <name>` calls pick up the new
artifacts immediately.
## Maintaining The Runtime Bundle
-The checked-in [`runtime-bundle.toml`](/home/thales/projects/personal/banger/runtime-bundle.toml)
+The checked-in [`config/runtime-bundle.toml`](/home/thales/projects/personal/banger/config/runtime-bundle.toml)
is a template. Keep `bundle_metadata` accurate there, but use a separate local
manifest copy when you need concrete `url` and `sha256` values for bootstrap
testing or publication.
-Package a local `./runtime/` tree into an archive:
+Package a local `./build/runtime/` tree into an archive:
```bash
make runtime-package
```
-That writes `dist/banger-runtime.tar.gz` and prints its SHA256 so you can update
+That writes `build/dist/banger-runtime.tar.gz` and prints its SHA256 so you can update
a local manifest copy before testing bootstrap changes or publishing the
archive elsewhere.
@@ -499,10 +511,10 @@ The benchmark prints JSON with:
## Remaining Shell Helpers
The runtime VM lifecycle is managed through `banger`. The remaining shell scripts are not the primary user interface:
-- `customize.sh`: manual reference flow for rootfs customization; `banger image build` is now Go-native, but the script still reads
+- `scripts/customize.sh`: manual reference flow for rootfs customization; `banger image build` is now Go-native, but the script still reads
assets from `BANGER_RUNTIME_DIR` and stores transient state under
`BANGER_STATE_DIR`/XDG state
-- `make-rootfs.sh`: convenience wrapper for rebuilding `./runtime/rootfs-docker.ext4`
-- `interactive.sh`: manual one-off rootfs customization over SSH
-- `packages.sh`: shell helper library
-- `verify.sh`: smoke test for the Go workflow (`./verify.sh --nat` adds NAT coverage)
+- `scripts/make-rootfs.sh`: convenience wrapper for rebuilding `./build/runtime/rootfs-docker.ext4`
+- `scripts/interactive.sh`: manual one-off rootfs customization over SSH
+- `scripts/lib/packages.sh`: shell helper library
+- `scripts/verify.sh`: smoke test for the Go workflow (`./scripts/verify.sh --nat` adds NAT coverage)
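
The `make runtime-package` flow in the diff above amounts to tarring a runtime tree and printing its SHA256 for the `sha256` field of a local manifest copy. A hedged stand-alone sketch using placeholder paths (the real target packages `./build/runtime/` into `build/dist/banger-runtime.tar.gz`):

```shell
# Package a placeholder runtime tree and print the checksum that would go
# into a local runtime-bundle manifest. All paths here are temporary stand-ins.
set -eu
work=$(mktemp -d)
mkdir -p "$work/runtime" "$work/build/dist"
echo "vmlinux placeholder" > "$work/runtime/vmlinux"
tar -czf "$work/build/dist/banger-runtime.tar.gz" -C "$work/runtime" .
# Print only the hex digest, the value the manifest's sha256 field expects.
sha256sum "$work/build/dist/banger-runtime.tar.gz" | awk '{print $1}'
rm -rf "$work"
```

Because the bootstrap verifies the archive against `sha256`, regenerating the tarball always means updating the manifest copy with the freshly printed digest.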