Commit graph

5 commits

4004ce2e7e
imagecat,kernelcat: bound staged download, hash before extract
Both Fetch flows previously streamed resp.Body straight into
zstd → tar → on-disk extractor with the SHA256 check tacked on at
the END. A bad mirror, or an attacker who has compromised the catalog
host, could ship a multi-gigabyte tarball, watch banger expand it to
disk, and only THEN see the helpful "sha256 mismatch" message,
having already filled the host filesystem.

Reorder the operations: stage the compressed tarball to a temp file
under the destination directory through an io.LimitReader (cap +1
bytes), hash on the way in, refuse to decompress if either the cap
trips or the SHA mismatches. Worst-case disk use is bounded by the
cap, not by the source.

Cap is exposed as a package var (MaxFetchedBundleBytes,
MaxFetchedKernelBytes) so callers can tune per-deployment and tests
can squeeze it down to provoke the rejection. Default 8 GiB —
generous enough for a 4 GiB rootfs (which compresses to ~1-2 GiB),
tight enough to make a "fill the host disk" attack expensive.

The temp file lives in the destination dir so extraction stays on
the same filesystem and we don't pay for cross-FS rename. defer
os.Remove cleans up; the existing per-package cleanup() handler
still removes any partial extraction on hash mismatch / extraction
failure.

Tests: each package gets a TestFetchRejectsOversizedTarballBeforeExtraction
that sets the cap to 64 bytes, points Fetch at a multi-KB tarball, and
asserts (a) the error mentions "cap", (b) the destination dir is left
clean (no leaked rootfs / manifest / kernel tree). All existing tests
still pass: happy path, hash mismatch, missing files, path traversal,
HTTP error, etc.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 16:09:55 -03:00
75baf2e415
publish-golden-image: content-addressed tarball names
Embed the sha256 prefix in the uploaded filename so every rebuild
lives at a unique URL. Cloudflare's edge cache (and any similar CDN
in front of R2) can never serve stale bytes for the URL the catalog
points at. The R2 console offers no per-URL purge for this bucket
layout, so making the URL itself content-addressed is the only
durable fix.

Also republishes the debian-bookworm catalog entry with the new
filename.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 15:26:57 -03:00
81a27d6648
imagecat: publish debian-bookworm bundle with boot fixes
End-to-end verified:
  banger image pull debian-bookworm
  banger vm run --image debian-bookworm --name goldenvm
boots through multi-user.target, sshd starts, and vm run drops into
an interactive ssh session.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 14:59:01 -03:00
ab5627aec2
imagecat: publish debian-bookworm golden image
First entry in the image catalog. Verified end-to-end:
  - https://images.thaloco.com/debian-bookworm-x86_64.tar.zst reachable
  - sha256 071495e6... matches
  - bundle unpacks to rootfs.ext4 (4 GiB) + manifest.json with the
    expected name/distro/arch/kernel_ref.

publish-golden-image.sh tweaks:
  - default RCLONE_REMOTE from 'r2' to 'banger-images' (matches the
    rclone config actually in use here).
  - rclone copyto now passes --s3-no-check-bucket and --no-check-dest
    so scoped R2 tokens without HeadBucket/HeadObject permission
    still upload cleanly.

To use: restart bangerd so it picks up the new embedded catalog,
then `banger image pull debian-bookworm`.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 13:25:42 -03:00
3d9ae624b1
imagecat: catalog + fetch for banger image bundles
New package mirroring `kernelcat`: catalog + SHA256-verified HTTP
fetch of `.tar.zst` bundles that contain rootfs.ext4 + manifest.json.
Mounted empty (version:1, entries:[]) so nothing is pullable via the
bundle path yet; wiring into `banger image pull` lands in a later
phase.

- catalog.go: Catalog/CatEntry, LoadEmbedded, ParseCatalog, Lookup,
  ValidateName.
- fetch.go: Fetch(ctx, client, destDir, entry) downloads the bundle,
  verifies sha256, extracts exactly rootfs.ext4 and manifest.json
  into destDir, returns the parsed manifest. Rejects unexpected tar
  entries, unsafe paths, non-regular files, and cleans up partial
  writes on failure.
- Thirteen unit tests (happy path + every failure mode).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-17 15:11:52 -03:00