Manage image artifacts and show VM create progress
Stop relying on ad hoc rootfs handling: add image promotion, managed work-seed fingerprint metadata, and lazy self-healing for older managed images after the first create.

Rebuild guest images with baked-in SSH access, a guest NIC bootstrap, and default opencode services, and add the staged Void kernel/initramfs/modules workflow so void-exp uses a matching Void boot stack.

Replace the opaque blocking vm.create RPC with a begin/status flow that prints live stages in the CLI while still waiting for vsock health and opencode on guest port 4096.

Validated with GOCACHE=/tmp/banger-gocache go test ./... and live void-exp create/delete smoke runs.
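The begin/status flow described above can be sketched as a small polling loop. This is a minimal illustration only: the `operation` struct and the `begin`/`status` callbacks are simplified stand-ins for the commit's actual `api.VMCreateOperation` type and `vm.create.begin`/`vm.create.status` RPCs.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// operation mirrors the shape of the begin/status payload: the daemon
// reports a stage plus done/success/error fields, and the CLI polls until
// the operation completes. Field names here are illustrative.
type operation struct {
	Stage   string
	Done    bool
	Success bool
	Error   string
}

// pollCreate drives a begin/status loop: start the operation, then poll
// for status, printing each stage transition, until the daemon reports done.
func pollCreate(begin, status func() operation) error {
	op := begin()
	last := ""
	for {
		if op.Stage != last {
			fmt.Println("[vm create]", op.Stage)
			last = op.Stage
		}
		if op.Done {
			if op.Success {
				return nil
			}
			return errors.New(op.Error)
		}
		time.Sleep(10 * time.Millisecond) // the real CLI polls every few hundred ms
		op = status()
	}
}

func main() {
	// Simulated daemon: three stages ending in success.
	stages := []operation{
		{Stage: "queued"},
		{Stage: "resolve_image"},
		{Stage: "boot", Done: true, Success: true},
	}
	i := 0
	begin := func() operation { return stages[0] }
	status := func() operation {
		if i < len(stages)-1 {
			i++
		}
		return stages[i]
	}
	if err := pollCreate(begin, status); err != nil {
		fmt.Println("create failed:", err)
	}
}
```

The key property, visible in the real `runVMCreate` further down, is that the CLI only prints when the stage changes, so slow stages do not spam the terminal.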
parent 9f09b0d25c
commit 30f0c0b54a
37 changed files with 2334 additions and 99 deletions
@@ -12,7 +12,8 @@
 - `make build` builds `./banger`, `./bangerd`, and the bundled `./runtime/banger-vsock-agent` guest helper.
 - `make bench-create` benchmarks `vm create` and first-SSH readiness on the current host.
 - `make runtime-bundle` bootstraps `./runtime/` from the archive referenced by `RUNTIME_MANIFEST`; the checked-in `runtime-bundle.toml` is only a template.
-- `make rootfs-void` builds an experimental local-only `x86_64-glibc` Void rootfs plus work-seed under `./runtime/`; it does not replace the default Debian path or teach `banger image build` about Void.
+- `make void-kernel` downloads and stages a Void `linux6.12` kernel under `./runtime/void-kernel`, including the extracted `vmlinux`, raw `vmlinuz`, a matching generated `initramfs`, the config, and matching modules.
+- `make rootfs-void` builds an experimental local-only `x86_64-glibc` Void rootfs plus work-seed under `./runtime/`; it prefers staged `./runtime/void-kernel` modules when present, but does not replace the default Debian path or teach `banger image build` about Void.
 - `make verify-void` registers `void-exp` and runs the normal smoke test against that image.
 - `banger` validates required host tools per command and reports actionable missing-tool errors; do not assume one workstation's package set.
 - `./banger vm create --name testbox` creates and starts a VM.

@@ -34,8 +35,8 @@
 - Primary automated coverage is `go test ./...`.
 - Manual verification for VM lifecycle changes: `./banger vm create`, confirm SSH access, then stop/delete the VM.
 - For host-integration changes, run `./banger doctor` as a quick readiness check before the live VM smoke.
-- Rebuilt images now include `mise`, `opencode`, `tmux-resurrect`/`tmux-continuum` defaults for `root`, and the `banger-vsock-agent` service used by the SSH reminder and guest health-check path; if you change guest provisioning, document whether users need to rebuild `./runtime/rootfs-docker.ext4` or another base image to pick it up.
-- The experimental Void rootfs path now includes the repo's basic dev baseline plus Docker and Compose, alongside boot, SSH, the vsock HTTP health agent, pinned `mise` plus `opencode` for `root`, a `bash` root shell while leaving `/bin/sh` alone, and the `/root` work-seed. Keep further baked-in tooling deliberate and user-driven.
+- Rebuilt images now include `mise`, `opencode`, a host-reachable default `opencode` server service on guest TCP port `4096`, `tmux-resurrect`/`tmux-continuum` defaults for `root`, and the `banger-vsock-agent` service used by the SSH reminder and guest health-check path; if you change guest provisioning, document whether users need to rebuild `./runtime/rootfs-docker.ext4` or another base image to pick it up.
+- The experimental Void rootfs path now includes the repo's basic dev baseline plus Docker and Compose, alongside boot, SSH, a guest network bootstrap sourced from the kernel `ip=` cmdline, the vsock HTTP health agent, pinned `mise` plus `opencode` for `root`, the default host-reachable `opencode` server service on guest TCP port `4096`, a `bash` root shell while leaving `/bin/sh` alone, and the `/root` work-seed. When `./runtime/void-kernel/` exists, the Void image registration path expects a complete staged Void kernel, initramfs, and modules tree and points `void-exp` at it. Keep further baked-in tooling deliberate and user-driven.
 - Rebuilt images also emit a `work-seed.ext4` sidecar used to speed up future VM creates. If you touch `/root` provisioning, verify both the rootfs and the work-seed output.
 - The daemon may keep idle TAP devices in a pool for faster creates. Smoke tests should treat `tap-pool-*` devices as reusable capacity, not cleanup leaks.
 - If you add a new operational workflow, document how to exercise it in `README.md`.

Makefile (8 changes)

@@ -24,7 +24,7 @@ VOID_VM_NAME ?= void-dev

 .DEFAULT_GOAL := help

-.PHONY: help build banger bangerd test fmt tidy clean rootfs rootfs-void void-register void-vm verify-void install runtime-bundle runtime-package check-runtime bench-create
+.PHONY: help build banger bangerd test fmt tidy clean rootfs rootfs-void void-kernel void-register void-vm verify-void install runtime-bundle runtime-package check-runtime bench-create

 help:
 	@printf '%s\n' \

@@ -39,6 +39,7 @@ help:
 	'  make tidy           Run go mod tidy' \
 	'  make clean          Remove built Go binaries' \
 	'  make rootfs         Rebuild the source-checkout default Debian rootfs image in ./runtime' \
+	'  make void-kernel    Download and stage a Void kernel, initramfs, and modules under ./runtime/void-kernel' \
 	'  make rootfs-void    Build an experimental Void Linux rootfs and work-seed in ./runtime' \
 	'  make void-register  Register or update the experimental Void image as $(VOID_IMAGE_NAME)' \
 	'  make void-vm        Register the experimental Void image and create a VM named $(VOID_VM_NAME)' \

@@ -107,11 +108,14 @@ install: build check-runtime
 rootfs:
 	BANGER_RUNTIME_DIR="$(abspath $(RUNTIME_SOURCE_DIR))" ./make-rootfs.sh

+void-kernel:
+	BANGER_RUNTIME_DIR="$(abspath $(RUNTIME_SOURCE_DIR))" ./make-void-kernel.sh
+
 rootfs-void:
 	BANGER_RUNTIME_DIR="$(abspath $(RUNTIME_SOURCE_DIR))" ./make-rootfs-void.sh

 void-register: build
-	./banger image register --name "$(VOID_IMAGE_NAME)" --rootfs "$(abspath $(RUNTIME_SOURCE_DIR))/rootfs-void.ext4" --work-seed "$(abspath $(RUNTIME_SOURCE_DIR))/rootfs-void.work-seed.ext4" --packages "$(abspath packages.void)"
+	BANGER_RUNTIME_DIR="$(abspath $(RUNTIME_SOURCE_DIR))" VOID_IMAGE_NAME="$(VOID_IMAGE_NAME)" BANGER_BIN="$(abspath ./banger)" ./register-void-image.sh

 void-vm: void-register
 	./banger vm create --image "$(VOID_IMAGE_NAME)" --name "$(VOID_VM_NAME)"

README.md (66 changes)

@@ -212,10 +212,11 @@ banger image build --name docker-dev --docker

 Rebuilt images install a pinned `mise` at `/usr/local/bin/mise`, activate it
 for bash login and interactive shells, install `opencode` through `mise`,
-configure `tmux-resurrect` plus `tmux-continuum` for `root` with periodic
-autosaves and manual-only restore by default, and bake in the
-`banger-vsock-agent` systemd service used by the post-SSH reminder path and
-guest health checks. They
+expose `/usr/local/bin/opencode`, configure `tmux-resurrect` plus
+`tmux-continuum` for `root` with periodic autosaves and manual-only restore by
+default, start a host-reachable `opencode serve` service on guest TCP port
+`4096`, and bake in the `banger-vsock-agent` systemd service used by the
+post-SSH reminder path and guest health checks. They
 also emit a `work-seed.ext4` sidecar that lets new VMs clone a prepared `/root`
 work disk instead of rebuilding it from scratch on every create.

@@ -225,6 +226,17 @@ banger image show docker-dev
 banger image delete docker-dev
 ```

+Promote an existing unmanaged image into a managed one:
+```bash
+banger image promote default
+banger image promote void-exp
+```
+
+Promotion copies the image's `rootfs` and optional `work-seed` into the
+daemon's managed image state directory and keeps the same image ID, so existing
+VM references stay valid. The image's kernel, initrd, modules, and package
+manifest paths stay pointed at their current locations.
+
 `banger` auto-registers the bundled `default_rootfs` image when it exists. If
 the bundle does not include a separate base `rootfs.ext4`, `image build` falls
 back to using `rootfs-docker.ext4` as its default base image.

@@ -253,6 +265,10 @@ short best-effort HTTP and HTTPS probes; detected web listeners are shown as
 `https://<hostname>.vm:port/`. Older images without `ss` may need rebuilding
 before `vm ports` works.

+Newly rebuilt images also start `opencode serve` by default on guest TCP port
+`4096`, bound on guest interfaces so the host can reach it directly at the
+guest IP or via the endpoint shown by `banger vm ports`.
+
 ## Storage Model
 - VMs share a read-only base rootfs image.
 - Each VM gets its own sparse writable system overlay for `/`.

@@ -286,7 +302,8 @@ make rootfs
 ```

 That rebuild also regenerates `./runtime/rootfs-docker.work-seed.ext4`, which
-the daemon uses to speed up future `vm create` calls.
+the daemon uses to speed up future `vm create` calls, and bakes in the default
+host-reachable `opencode` server service.

 If your runtime bundle does not include `./runtime/rootfs.ext4`, pass an
 explicit base image instead:

@@ -303,25 +320,37 @@ make rootfs
 `make rootfs` expects a bootstrapped runtime bundle. If `./runtime/rootfs.ext4`
 is not available, pass an explicit `--base-rootfs` to `./make-rootfs.sh`.
 Existing VMs keep using their current image and disks; rebuilds only affect VMs
-created from the rebuilt image afterward.
+created from the rebuilt image afterward. Restarting an existing VM is not
+enough to pick up guest provisioning changes such as the default `opencode`
+server service.

 ## Experimental Void Rootfs
 There is also a separate, opt-in builder for an experimental Void Linux guest
 path:
 ```bash
+make void-kernel
 make rootfs-void
 ```

 That writes:
+- `./runtime/void-kernel/` when `make void-kernel` is used
 - `./runtime/rootfs-void.ext4`
 - `./runtime/rootfs-void.work-seed.ext4`

 This path is intentionally local-only and does not change the default Debian
-image flow. It reuses the current runtime bundle kernel, initrd, and modules,
-but builds a lean `x86_64-glibc` Void userspace with:
+image flow. `make void-kernel` stages an actual Void `linux6.12` kernel package
+under `./runtime/void-kernel/`, including the raw `vmlinuz`, extracted
+Firecracker `vmlinux`, a matching `initramfs`, the matching config, and the
+matching modules tree. The initramfs is generated locally with `dracut`
+against the downloaded Void sysroot so the kernel, initrd, and modules stay
+aligned. `make rootfs-void` then prefers that staged modules tree when it exists;
+otherwise it falls back to the runtime bundle modules. The rootfs builder
+itself still builds a lean `x86_64-glibc` Void userspace with:
 - `bash` installed for interactive/admin use
 - pinned `mise` installed at `/usr/local/bin/mise`, activated for `root` bash shells
 - `opencode` installed through `mise`, with `/usr/local/bin/opencode` available by default
+- a guest network bootstrap that configures the VM NIC from the kernel `ip=` boot arg
+- a host-reachable `opencode serve` runit service enabled on guest TCP port `4096`
 - `docker` plus `docker-compose` installed from Void packages
 - the `docker` runit service enabled, with Docker netfilter/forwarding kernel prep
 - `openssh` enabled under runit

@@ -333,26 +362,30 @@ It still keeps some Debian-oriented extras out for now:
 - no tmux plugin defaults

 The builder fetches official static XBPS tools and packages from the Void
-mirror during the build. It currently supports only `x86_64-glibc`.
+mirror during the build. The kernel fetcher and rootfs builder currently
+support only `x86_64`.

 The package set comes from [`packages.void`](/home/thales/projects/personal/banger/packages.void).
-You can override the mirror, size, or output path directly:
+You can override the mirror, size, output path, or kernel package directly:
 ```bash
+./make-void-kernel.sh --kernel-package linux6.12
 ./make-rootfs-void.sh --mirror https://repo-default.voidlinux.org --size 2G
 ```

 The fastest local iteration loop does not require changing your default image
 config at all:
 ```bash
+make void-kernel
 make rootfs-void
 make void-register
 ./banger vm create --image void-exp --name void-dev
 ./banger vm ssh void-dev
 ```

-Rebuild the Void rootfs and recreate existing `void-exp` VMs after changing the
-package set or guest provisioning; restart alone will not update the image
-contents or `/root` work-seed.
+Rebuild the staged Void kernel or Void rootfs, then recreate existing
+`void-exp` VMs after changing the package set, guest provisioning, or staged
+kernel artifacts; restart alone will not update the image contents, kernel, or
+`/root` work-seed.

 There is also a smoke path for the experimental image:
 ```bash

@@ -361,7 +394,9 @@ make verify-void

 `make void-register` uses the unmanaged image registration path to create or
 update a `void-exp` image record in place, so repeated rebuilds do not require
-editing `~/.config/banger/config.toml`.
+editing `~/.config/banger/config.toml`. It expects a complete staged Void
+kernel set under `./runtime/void-kernel/` and points the experimental image at
+the staged Void `vmlinux`, `initramfs`, and matching modules tree.

 There is also a one-step helper target:
 ```bash

@@ -390,6 +425,9 @@ banger image register \
   --name void-exp \
   --rootfs ./runtime/rootfs-void.ext4 \
   --work-seed ./runtime/rootfs-void.work-seed.ext4 \
+  --kernel ./runtime/void-kernel/boot/vmlinux-6.12.77_1 \
+  --initrd ./runtime/void-kernel/boot/initramfs-6.12.77_1.img \
+  --modules ./runtime/void-kernel/lib/modules/6.12.77_1 \
   --packages ./packages.void
 ```

customize.sh (28 changes)

@@ -418,6 +418,12 @@ DEBIAN_FRONTEND=noninteractive apt-get -y upgrade
 DEBIAN_FRONTEND=noninteractive apt-get -y install ${APT_PACKAGES_ESCAPED}
 curl -fsSL https://mise.run | MISE_INSTALL_PATH=\"$MISE_INSTALL_PATH\" MISE_VERSION=\"$MISE_VERSION\" sh
 \"$MISE_INSTALL_PATH\" use -g github:anomalyco/opencode
+\"$MISE_INSTALL_PATH\" reshim
+if [[ ! -e /root/.local/share/mise/shims/opencode ]]; then
+  echo 'opencode shim not found after mise install' >&2
+  exit 1
+fi
+ln -snf /root/.local/share/mise/shims/opencode /usr/local/bin/opencode
 mkdir -p /etc/profile.d
 cat > /etc/profile.d/mise.sh <<'MISEPROFILE'
 if [ -n \"\${BASH_VERSION:-}\" ] && [ -x \"$MISE_INSTALL_PATH\" ]; then

@@ -441,6 +447,28 @@ fi
 rm -f /root/get-docker /root/get-docker.sh /tmp/get-docker /tmp/get-docker.sh
 chmod 0755 /usr/local/bin/banger-vsock-agent
 mkdir -p /etc/modules-load.d /etc/systemd/system
+cat > /etc/systemd/system/banger-opencode.service <<'EOF'
+[Unit]
+Description=Banger opencode server
+After=network.target
+RequiresMountsFor=/root
+
+[Service]
+Type=simple
+Environment=HOME=/root
+WorkingDirectory=/root
+ExecStart=/usr/local/bin/opencode serve --hostname 0.0.0.0 --port 4096
+Restart=on-failure
+RestartSec=1
+
+[Install]
+WantedBy=multi-user.target
+EOF
+chmod 0644 /etc/systemd/system/banger-opencode.service
+if command -v systemctl >/dev/null 2>&1; then
+  systemctl daemon-reload || true
+  systemctl enable --now banger-opencode.service || true
+fi
 cat > /etc/modules-load.d/banger-vsock.conf <<'EOF'
 vsock
 vmw_vsock_virtio_transport
@@ -3,8 +3,12 @@
 # Copy the values you want into ~/.config/banger/config.toml and replace
 # /abs/path/to/banger with your checkout path. Do not set default_base_rootfs
 # to the Void image yet; banger image build still assumes the Debian flow.
+# If you run `make void-kernel`, also merge the commented kernel/initrd/modules lines.

 runtime_dir = "/abs/path/to/banger/runtime"
 default_image_name = "void-exp"
 default_rootfs = "/abs/path/to/banger/runtime/rootfs-void.ext4"
 default_work_seed = "/abs/path/to/banger/runtime/rootfs-void.work-seed.ext4"
+# default_kernel = "/abs/path/to/banger/runtime/void-kernel/boot/vmlinux-6.12.77_1"
+# default_initrd = "/abs/path/to/banger/runtime/void-kernel/boot/initramfs-6.12.77_1.img"
+# default_modules_dir = "/abs/path/to/banger/runtime/void-kernel/lib/modules/6.12.77_1"
@@ -1,6 +1,10 @@
 package api

-import "banger/internal/model"
+import (
+	"time"
+
+	"banger/internal/model"
+)

 type Empty struct{}

@@ -24,6 +28,32 @@ type VMCreateParams struct {
 	NoStart bool `json:"no_start,omitempty"`
 }

+type VMCreateStatusParams struct {
+	ID string `json:"id"`
+}
+
+type VMCreateOperation struct {
+	ID        string          `json:"id"`
+	VMID      string          `json:"vm_id,omitempty"`
+	VMName    string          `json:"vm_name,omitempty"`
+	Stage     string          `json:"stage,omitempty"`
+	Detail    string          `json:"detail,omitempty"`
+	StartedAt time.Time       `json:"started_at,omitempty"`
+	UpdatedAt time.Time       `json:"updated_at,omitempty"`
+	Done      bool            `json:"done"`
+	Success   bool            `json:"success"`
+	Error     string          `json:"error,omitempty"`
+	VM        *model.VMRecord `json:"vm,omitempty"`
+}
+
+type VMCreateBeginResult struct {
+	Operation VMCreateOperation `json:"operation"`
+}
+
+type VMCreateStatusResult struct {
+	Operation VMCreateOperation `json:"operation"`
+}
+
 type VMRefParams struct {
 	IDOrName string `json:"id_or_name"`
 }
@@ -46,6 +46,16 @@ var (
 	vmHealthFunc = func(ctx context.Context, socketPath, idOrName string) (api.VMHealthResult, error) {
 		return rpc.Call[api.VMHealthResult](ctx, socketPath, "vm.health", api.VMRefParams{IDOrName: idOrName})
 	}
+	vmCreateBeginFunc = func(ctx context.Context, socketPath string, params api.VMCreateParams) (api.VMCreateBeginResult, error) {
+		return rpc.Call[api.VMCreateBeginResult](ctx, socketPath, "vm.create.begin", params)
+	}
+	vmCreateStatusFunc = func(ctx context.Context, socketPath, operationID string) (api.VMCreateStatusResult, error) {
+		return rpc.Call[api.VMCreateStatusResult](ctx, socketPath, "vm.create.status", api.VMCreateStatusParams{ID: operationID})
+	}
+	vmCreateCancelFunc = func(ctx context.Context, socketPath, operationID string) error {
+		_, err := rpc.Call[api.Empty](ctx, socketPath, "vm.create.cancel", api.VMCreateStatusParams{ID: operationID})
+		return err
+	}
 	vmPortsFunc = func(ctx context.Context, socketPath, idOrName string) (api.VMPortsResult, error) {
 		return rpc.Call[api.VMPortsResult](ctx, socketPath, "vm.ports", api.VMRefParams{IDOrName: idOrName})
 	}

@@ -323,11 +333,11 @@ func newVMCreateCommand() *cobra.Command {
 			if err != nil {
 				return err
 			}
-			result, err := rpc.Call[api.VMShowResult](cmd.Context(), layout.SocketPath, "vm.create", params)
+			vm, err := runVMCreate(cmd.Context(), layout.SocketPath, cmd.ErrOrStderr(), params)
 			if err != nil {
 				return err
 			}
-			return printVMSummary(cmd.OutOrStdout(), result.VM)
+			return printVMSummary(cmd.OutOrStdout(), vm)
 		},
 	}
 	cmd.Flags().StringVar(&name, "name", "", "vm name")

@@ -575,6 +585,7 @@ func newImageCommand() *cobra.Command {
 	cmd.AddCommand(
 		newImageBuildCommand(),
 		newImageRegisterCommand(),
+		newImagePromoteCommand(),
 		newImageListCommand(),
 		newImageShowCommand(),
 		newImageDeleteCommand(),

@@ -651,6 +662,28 @@ func newImageRegisterCommand() *cobra.Command {
 	return cmd
 }

+func newImagePromoteCommand() *cobra.Command {
+	return &cobra.Command{
+		Use:   "promote <id-or-name>",
+		Short: "Promote an unmanaged image to a managed artifact",
+		Args:  exactArgsUsage(1, "usage: banger image promote <id-or-name>"),
+		RunE: func(cmd *cobra.Command, args []string) error {
+			if err := system.EnsureSudo(cmd.Context()); err != nil {
+				return err
+			}
+			layout, _, err := ensureDaemon(cmd.Context())
+			if err != nil {
+				return err
+			}
+			result, err := rpc.Call[api.ImageShowResult](cmd.Context(), layout.SocketPath, "image.promote", api.ImageRefParams{IDOrName: args[0]})
+			if err != nil {
+				return err
+			}
+			return printImageSummary(cmd.OutOrStdout(), result.Image)
+		},
+	}
+}
+
 func newImageListCommand() *cobra.Command {
 	return &cobra.Command{
 		Use: "list",

@@ -1255,6 +1288,141 @@ type anyWriter interface {
 	Write(p []byte) (n int, err error)
 }

+func runVMCreate(ctx context.Context, socketPath string, stderr io.Writer, params api.VMCreateParams) (model.VMRecord, error) {
+	begin, err := vmCreateBeginFunc(ctx, socketPath, params)
+	if err != nil {
+		return model.VMRecord{}, err
+	}
+	renderer := newVMCreateProgressRenderer(stderr)
+	renderer.render(begin.Operation)
+
+	op := begin.Operation
+	for {
+		if op.Done {
+			renderer.render(op)
+			if op.Success && op.VM != nil {
+				return *op.VM, nil
+			}
+			if strings.TrimSpace(op.Error) == "" {
+				return model.VMRecord{}, errors.New("vm create failed")
+			}
+			return model.VMRecord{}, errors.New(op.Error)
+		}
+
+		select {
+		case <-ctx.Done():
+			cancelCtx, cancel := context.WithTimeout(context.Background(), time.Second)
+			defer cancel()
+			_ = vmCreateCancelFunc(cancelCtx, socketPath, op.ID)
+			return model.VMRecord{}, ctx.Err()
+		case <-time.After(200 * time.Millisecond):
+		}
+
+		status, err := vmCreateStatusFunc(ctx, socketPath, op.ID)
+		if err != nil {
+			if ctx.Err() != nil {
+				cancelCtx, cancel := context.WithTimeout(context.Background(), time.Second)
+				defer cancel()
+				_ = vmCreateCancelFunc(cancelCtx, socketPath, op.ID)
+				return model.VMRecord{}, ctx.Err()
+			}
+			return model.VMRecord{}, err
+		}
+		op = status.Operation
+		renderer.render(op)
+	}
+}
+
+type vmCreateProgressRenderer struct {
+	out      io.Writer
+	enabled  bool
+	lastLine string
+}
+
+func newVMCreateProgressRenderer(out io.Writer) *vmCreateProgressRenderer {
+	return &vmCreateProgressRenderer{
+		out:     out,
+		enabled: writerSupportsProgress(out),
+	}
+}
+
+func (r *vmCreateProgressRenderer) render(op api.VMCreateOperation) {
+	if r == nil || !r.enabled {
+		return
+	}
+	line := formatVMCreateProgress(op)
+	if line == "" || line == r.lastLine {
+		return
+	}
+	r.lastLine = line
+	_, _ = fmt.Fprintln(r.out, line)
+}
+
+func writerSupportsProgress(out io.Writer) bool {
+	file, ok := out.(*os.File)
+	if !ok {
+		return false
+	}
+	info, err := file.Stat()
+	if err != nil {
+		return false
+	}
+	return info.Mode()&os.ModeCharDevice != 0
+}
+
+func formatVMCreateProgress(op api.VMCreateOperation) string {
+	stage := strings.TrimSpace(op.Stage)
+	detail := strings.TrimSpace(op.Detail)
+	label := vmCreateStageLabel(stage)
+	if label == "" && detail == "" {
+		return ""
+	}
+	if label == "" {
+		return "[vm create] " + detail
+	}
+	if detail == "" {
+		return "[vm create] " + label
+	}
+	return "[vm create] " + label + ": " + detail
+}
+
+func vmCreateStageLabel(stage string) string {
+	switch strings.TrimSpace(stage) {
+	case "queued":
+		return "queued"
+	case "resolve_image":
+		return "resolving image"
+	case "reserve_vm":
+		return "allocating vm"
+	case "preflight":
|
||||||
|
return "checking host prerequisites"
|
||||||
|
case "prepare_rootfs":
|
||||||
|
return "preparing root filesystem"
|
||||||
|
case "prepare_host_features":
|
||||||
|
return "preparing host features"
|
||||||
|
case "prepare_work_disk":
|
||||||
|
return "preparing work disk"
|
||||||
|
case "boot_firecracker":
|
||||||
|
return "starting firecracker"
|
||||||
|
case "wait_vsock_agent":
|
||||||
|
return "waiting for vsock agent"
|
||||||
|
case "wait_guest_ready":
|
||||||
|
return "waiting for guest services"
|
||||||
|
case "wait_opencode":
|
||||||
|
return "waiting for opencode"
|
||||||
|
case "apply_dns":
|
||||||
|
return "publishing dns"
|
||||||
|
case "apply_nat":
|
||||||
|
return "configuring nat"
|
||||||
|
case "finalize":
|
||||||
|
return "finalizing"
|
||||||
|
case "ready":
|
||||||
|
return "ready"
|
||||||
|
default:
|
||||||
|
return strings.ReplaceAll(stage, "_", " ")
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
func shortID(id string) string {
|
func shortID(id string) string {
|
||||||
if len(id) <= 12 {
|
if len(id) <= 12 {
|
||||||
return id
|
return id
|
||||||
|
|
|
||||||
|
|
@@ -170,6 +170,17 @@ func TestImageRegisterFlagsExist(t *testing.T) {
 	}
 }
 
+func TestImagePromoteCommandExists(t *testing.T) {
+	root := NewBangerCommand()
+	image, _, err := root.Find([]string{"image"})
+	if err != nil {
+		t.Fatalf("find image: %v", err)
+	}
+	if _, _, err := image.Find([]string{"promote"}); err != nil {
+		t.Fatalf("find promote: %v", err)
+	}
+}
+
 func TestVMKillFlagsExist(t *testing.T) {
 	root := NewBangerCommand()
 	vm, _, err := root.Find([]string{"vm"})
@@ -304,6 +315,95 @@ func TestVMCreateParamsFromFlagsRejectsNonPositiveCPUAndMemory(t *testing.T) {
 	}
 }
 
+func TestRunVMCreatePollsUntilDone(t *testing.T) {
+	origBegin := vmCreateBeginFunc
+	origStatus := vmCreateStatusFunc
+	origCancel := vmCreateCancelFunc
+	t.Cleanup(func() {
+		vmCreateBeginFunc = origBegin
+		vmCreateStatusFunc = origStatus
+		vmCreateCancelFunc = origCancel
+	})
+
+	vm := model.VMRecord{
+		ID:   "vm-id",
+		Name: "devbox",
+		Spec: model.VMSpec{WorkDiskSizeBytes: model.DefaultWorkDiskSize},
+		Runtime: model.VMRuntime{
+			State:   model.VMStateRunning,
+			GuestIP: "172.16.0.2",
+			DNSName: "devbox.vm",
+		},
+	}
+	vmCreateBeginFunc = func(context.Context, string, api.VMCreateParams) (api.VMCreateBeginResult, error) {
+		return api.VMCreateBeginResult{
+			Operation: api.VMCreateOperation{
+				ID:     "op-1",
+				Stage:  "prepare_work_disk",
+				Detail: "cloning work seed",
+			},
+		}, nil
+	}
+	statusCalls := 0
+	vmCreateStatusFunc = func(context.Context, string, string) (api.VMCreateStatusResult, error) {
+		statusCalls++
+		if statusCalls == 1 {
+			return api.VMCreateStatusResult{
+				Operation: api.VMCreateOperation{
+					ID:     "op-1",
+					Stage:  "wait_opencode",
+					Detail: "waiting for opencode on guest port 4096",
+				},
+			}, nil
+		}
+		return api.VMCreateStatusResult{
+			Operation: api.VMCreateOperation{
+				ID:      "op-1",
+				Stage:   "ready",
+				Detail:  "vm is ready",
+				Done:    true,
+				Success: true,
+				VM:      &vm,
+			},
+		}, nil
+	}
+	vmCreateCancelFunc = func(context.Context, string, string) error {
+		t.Fatal("cancel should not be called")
+		return nil
+	}
+
+	got, err := runVMCreate(context.Background(), "/tmp/bangerd.sock", &bytes.Buffer{}, api.VMCreateParams{Name: "devbox"})
+	if err != nil {
+		t.Fatalf("runVMCreate: %v", err)
+	}
+	if got.Name != vm.Name || got.Runtime.GuestIP != vm.Runtime.GuestIP {
+		t.Fatalf("vm = %+v, want %+v", got, vm)
+	}
+	if statusCalls != 2 {
+		t.Fatalf("statusCalls = %d, want 2", statusCalls)
+	}
+}
+
+func TestVMCreateProgressRendererSuppressesDuplicateLines(t *testing.T) {
+	var stderr bytes.Buffer
+	renderer := &vmCreateProgressRenderer{out: &stderr, enabled: true}
+
+	renderer.render(api.VMCreateOperation{Stage: "prepare_work_disk", Detail: "cloning work seed"})
+	renderer.render(api.VMCreateOperation{Stage: "prepare_work_disk", Detail: "cloning work seed"})
+	renderer.render(api.VMCreateOperation{Stage: "wait_opencode", Detail: "waiting for opencode on guest port 4096"})
+
+	lines := strings.Split(strings.TrimSpace(stderr.String()), "\n")
+	if len(lines) != 2 {
+		t.Fatalf("rendered lines = %q, want 2 lines", stderr.String())
+	}
+	if lines[0] != "[vm create] preparing work disk: cloning work seed" {
+		t.Fatalf("first line = %q", lines[0])
+	}
+	if lines[1] != "[vm create] waiting for opencode: waiting for opencode on guest port 4096" {
+		t.Fatalf("second line = %q", lines[1])
+	}
+}
+
 func TestVMSetParamsFromFlagsConflict(t *testing.T) {
 	if _, err := vmSetParamsFromFlags("devbox", -1, -1, "", true, true); err == nil {
 		t.Fatal("expected nat conflict error")
@@ -56,6 +56,7 @@ func (d *Daemon) registeredCapabilities() []vmCapability {
 	}
 	return []vmCapability{
 		workDiskCapability{},
+		opencodeCapability{},
 		dnsCapability{},
 		natCapability{},
 	}
@@ -103,6 +104,14 @@ func (d *Daemon) prepareCapabilityHosts(ctx context.Context, vm *model.VMRecord,
 
 func (d *Daemon) postStartCapabilities(ctx context.Context, vm model.VMRecord, image model.Image) error {
 	for _, capability := range d.registeredCapabilities() {
+		switch capability.Name() {
+		case "dns":
+			vmCreateStage(ctx, "apply_dns", "publishing vm dns record")
+		case "nat":
+			if vm.Spec.NATEnabled {
+				vmCreateStage(ctx, "apply_nat", "configuring nat")
+			}
+		}
 		if hook, ok := capability.(postStartCapability); ok {
 			if err := hook.PostStart(ctx, d, vm, image); err != nil {
 				return err
@@ -191,10 +200,11 @@ func (workDiskCapability) ContributeMachine(cfg *firecracker.MachineConfig, vm m
 }
 
 func (workDiskCapability) PrepareHost(ctx context.Context, d *Daemon, vm *model.VMRecord, image model.Image) error {
-	if err := d.ensureWorkDisk(ctx, vm, image); err != nil {
+	prep, err := d.ensureWorkDisk(ctx, vm, image)
+	if err != nil {
 		return err
 	}
-	return d.ensureAuthorizedKeyOnWorkDisk(ctx, vm)
+	return d.ensureAuthorizedKeyOnWorkDisk(ctx, vm, image, prep)
 }
 
 func (workDiskCapability) AddDoctorChecks(_ context.Context, d *Daemon, report *system.Report) {
@@ -143,3 +143,15 @@ func TestContributeHooksPopulateGuestAndMachineConfig(t *testing.T) {
 		t.Fatalf("guest fstab = %q, want %q", fstab, want)
 	}
 }
+
+func TestRegisteredCapabilitiesIncludeOpencode(t *testing.T) {
+	d := &Daemon{}
+	var names []string
+	for _, capability := range d.registeredCapabilities() {
+		names = append(names, capability.Name())
+	}
+	want := []string{"work-disk", "opencode", "dns", "nat"}
+	if !reflect.DeepEqual(names, want) {
+		t.Fatalf("capabilities = %v, want %v", names, want)
+	}
+}
@@ -32,6 +32,8 @@ type Daemon struct {
 	runner system.CommandRunner
 	logger *slog.Logger
 	mu     sync.Mutex
+	createOpsMu sync.Mutex
+	createOps   map[string]*vmCreateOperationState
 	vmLocksMu sync.Mutex
 	vmLocks   map[string]*sync.Mutex
 	tapPoolMu sync.Mutex
@@ -249,6 +251,27 @@ func (d *Daemon) dispatch(ctx context.Context, req rpc.Request) rpc.Response {
 		}
 		vm, err := d.CreateVM(ctx, params)
 		return marshalResultOrError(api.VMShowResult{VM: vm}, err)
+	case "vm.create.begin":
+		params, err := rpc.DecodeParams[api.VMCreateParams](req)
+		if err != nil {
+			return rpc.NewError("bad_request", err.Error())
+		}
+		op, err := d.BeginVMCreate(ctx, params)
+		return marshalResultOrError(api.VMCreateBeginResult{Operation: op}, err)
+	case "vm.create.status":
+		params, err := rpc.DecodeParams[api.VMCreateStatusParams](req)
+		if err != nil {
+			return rpc.NewError("bad_request", err.Error())
+		}
+		op, err := d.VMCreateStatus(ctx, params.ID)
+		return marshalResultOrError(api.VMCreateStatusResult{Operation: op}, err)
+	case "vm.create.cancel":
+		params, err := rpc.DecodeParams[api.VMCreateStatusParams](req)
+		if err != nil {
+			return rpc.NewError("bad_request", err.Error())
+		}
+		err = d.CancelVMCreate(ctx, params.ID)
+		return marshalResultOrError(api.Empty{}, err)
 	case "vm.list":
 		vms, err := d.store.ListVMs(ctx)
 		return marshalResultOrError(api.VMListResult{VMs: vms}, err)
@@ -376,6 +399,13 @@ func (d *Daemon) dispatch(ctx context.Context, req rpc.Request) rpc.Response {
 		}
 		image, err := d.RegisterImage(ctx, params)
 		return marshalResultOrError(api.ImageShowResult{Image: image}, err)
+	case "image.promote":
+		params, err := rpc.DecodeParams[api.ImageRefParams](req)
+		if err != nil {
+			return rpc.NewError("bad_request", err.Error())
+		}
+		image, err := d.PromoteImage(ctx, params.IDOrName)
+		return marshalResultOrError(api.ImageShowResult{Image: image}, err)
 	case "image.delete":
 		params, err := rpc.DecodeParams[api.ImageRefParams](req)
 		if err != nil {
@@ -405,6 +435,7 @@ func (d *Daemon) backgroundLoop() {
 			if err := d.stopStaleVMs(context.Background()); err != nil && d.logger != nil {
 				d.logger.Error("background stale sweep failed", "error", err.Error())
 			}
+			d.pruneVMCreateOperations(time.Now().Add(-10 * time.Minute))
 		}
 	}
 }
@@ -2,6 +2,7 @@ package daemon
 
 import (
 	"bufio"
+	"bytes"
 	"context"
 	"encoding/json"
 	"net"
@@ -13,6 +14,7 @@ import (
 
 	"banger/internal/api"
 	"banger/internal/model"
+	"banger/internal/paths"
 	"banger/internal/rpc"
 	"banger/internal/store"
 )
@@ -368,6 +370,178 @@ func TestRegisterImageRejectsManagedOverwrite(t *testing.T) {
 	}
 }
 
+func TestPromoteImageCopiesArtifactsAndPreservesIdentity(t *testing.T) {
+	dir := t.TempDir()
+	rootfs, kernel, initrd, modulesDir, packages := writeDefaultImageArtifacts(t, dir)
+	workSeed := filepath.Join(dir, "rootfs-docker.work-seed.ext4")
+	workSeedContent := []byte("seed-data")
+	if err := os.WriteFile(workSeed, workSeedContent, 0o644); err != nil {
+		t.Fatalf("WriteFile(workSeed): %v", err)
+	}
+
+	db := openDefaultImageStore(t, dir)
+	now := time.Date(2026, time.March, 20, 12, 0, 0, 0, time.UTC)
+	existing := model.Image{
+		ID:           "promote-image-id",
+		Name:         "default",
+		Managed:      false,
+		RootfsPath:   rootfs,
+		WorkSeedPath: workSeed,
+		KernelPath:   kernel,
+		InitrdPath:   initrd,
+		ModulesDir:   modulesDir,
+		PackagesPath: packages,
+		Docker:       true,
+		CreatedAt:    now,
+		UpdatedAt:    now,
+	}
+	if err := db.UpsertImage(context.Background(), existing); err != nil {
+		t.Fatalf("UpsertImage: %v", err)
+	}
+	vm := testVM("uses-default", existing.ID, "172.16.0.44")
+	if err := db.UpsertVM(context.Background(), vm); err != nil {
+		t.Fatalf("UpsertVM: %v", err)
+	}
+
+	d := &Daemon{
+		layout: modelPathsLayoutForTest(dir),
+		store:  db,
+	}
+
+	image, err := d.PromoteImage(context.Background(), "default")
+	if err != nil {
+		t.Fatalf("PromoteImage: %v", err)
+	}
+	if !image.Managed {
+		t.Fatal("promoted image should be managed")
+	}
+	if image.ID != existing.ID || image.Name != existing.Name {
+		t.Fatalf("promoted image identity changed: %+v", image)
+	}
+	if !image.CreatedAt.Equal(existing.CreatedAt) {
+		t.Fatalf("CreatedAt = %s, want preserved %s", image.CreatedAt, existing.CreatedAt)
+	}
+	if !image.UpdatedAt.After(existing.UpdatedAt) {
+		t.Fatalf("UpdatedAt = %s, want newer than %s", image.UpdatedAt, existing.UpdatedAt)
+	}
+	wantArtifactDir := filepath.Join(d.layout.ImagesDir, existing.ID)
+	if image.ArtifactDir != wantArtifactDir {
+		t.Fatalf("ArtifactDir = %q, want %q", image.ArtifactDir, wantArtifactDir)
+	}
+	if image.RootfsPath != filepath.Join(wantArtifactDir, "rootfs.ext4") {
+		t.Fatalf("RootfsPath = %q, want managed copy", image.RootfsPath)
+	}
+	if image.WorkSeedPath != filepath.Join(wantArtifactDir, "work-seed.ext4") {
+		t.Fatalf("WorkSeedPath = %q, want managed copy", image.WorkSeedPath)
+	}
+	if image.KernelPath != kernel || image.InitrdPath != initrd || image.ModulesDir != modulesDir || image.PackagesPath != packages {
+		t.Fatalf("boot support paths changed unexpectedly: %+v", image)
+	}
+
+	rootfsContent, err := os.ReadFile(rootfs)
+	if err != nil {
+		t.Fatalf("ReadFile(rootfs): %v", err)
+	}
+	managedRootfsContent, err := os.ReadFile(image.RootfsPath)
+	if err != nil {
+		t.Fatalf("ReadFile(managed rootfs): %v", err)
+	}
+	if !bytes.Equal(managedRootfsContent, rootfsContent) {
+		t.Fatal("managed rootfs copy content mismatch")
+	}
+	managedWorkSeedContent, err := os.ReadFile(image.WorkSeedPath)
+	if err != nil {
+		t.Fatalf("ReadFile(managed work seed): %v", err)
+	}
+	if !bytes.Equal(managedWorkSeedContent, workSeedContent) {
+		t.Fatal("managed work seed copy content mismatch")
+	}
+
+	got, err := db.GetImageByName(context.Background(), "default")
+	if err != nil {
+		t.Fatalf("GetImageByName: %v", err)
+	}
+	if got.RootfsPath != image.RootfsPath || !got.Managed || got.ArtifactDir != image.ArtifactDir {
+		t.Fatalf("stored promoted image = %+v, want %+v", got, image)
+	}
+	gotVM, err := db.GetVMByID(context.Background(), vm.ID)
+	if err != nil {
+		t.Fatalf("GetVMByID: %v", err)
+	}
+	if gotVM.ImageID != existing.ID {
+		t.Fatalf("VM image ID = %q, want preserved %q", gotVM.ImageID, existing.ID)
+	}
+}
+
+func TestPromoteImageRejectsManagedImage(t *testing.T) {
+	dir := t.TempDir()
+	rootfs, kernel, initrd, modulesDir, packages := writeDefaultImageArtifacts(t, dir)
+	db := openDefaultImageStore(t, dir)
+	now := time.Date(2026, time.March, 20, 12, 0, 0, 0, time.UTC)
+	if err := db.UpsertImage(context.Background(), model.Image{
+		ID:           "managed-id",
+		Name:         "default",
+		Managed:      true,
+		ArtifactDir:  filepath.Join(dir, "images", "managed-id"),
+		RootfsPath:   rootfs,
+		KernelPath:   kernel,
+		InitrdPath:   initrd,
+		ModulesDir:   modulesDir,
+		PackagesPath: packages,
+		CreatedAt:    now,
+		UpdatedAt:    now,
+	}); err != nil {
+		t.Fatalf("UpsertImage: %v", err)
+	}
+	d := &Daemon{
+		layout: modelPathsLayoutForTest(dir),
+		store:  db,
+	}
+
+	_, err := d.PromoteImage(context.Background(), "default")
+	if err == nil || !strings.Contains(err.Error(), "already managed") {
+		t.Fatalf("PromoteImage(managed) error = %v", err)
+	}
+}
+
+func TestPromoteImageSkipsMissingWorkSeed(t *testing.T) {
+	dir := t.TempDir()
+	rootfs, kernel, initrd, modulesDir, packages := writeDefaultImageArtifacts(t, dir)
+	db := openDefaultImageStore(t, dir)
+	now := time.Date(2026, time.March, 20, 12, 0, 0, 0, time.UTC)
+	existing := model.Image{
+		ID:           "promote-missing-seed",
+		Name:         "default",
+		Managed:      false,
+		RootfsPath:   rootfs,
+		WorkSeedPath: filepath.Join(dir, "missing.work-seed.ext4"),
+		KernelPath:   kernel,
+		InitrdPath:   initrd,
+		ModulesDir:   modulesDir,
+		PackagesPath: packages,
+		CreatedAt:    now,
+		UpdatedAt:    now,
+	}
+	if err := db.UpsertImage(context.Background(), existing); err != nil {
+		t.Fatalf("UpsertImage: %v", err)
+	}
+	d := &Daemon{
+		layout: modelPathsLayoutForTest(dir),
+		store:  db,
+	}
+
+	image, err := d.PromoteImage(context.Background(), "default")
+	if err != nil {
+		t.Fatalf("PromoteImage: %v", err)
+	}
+	if image.WorkSeedPath != "" {
+		t.Fatalf("WorkSeedPath = %q, want empty for missing source work seed", image.WorkSeedPath)
+	}
+	if _, err := os.Stat(filepath.Join(image.ArtifactDir, "work-seed.ext4")); !os.IsNotExist(err) {
+		t.Fatalf("managed work-seed should not exist, stat error = %v", err)
+	}
+}
+
 func openDefaultImageStore(t *testing.T, dir string) *store.Store {
 	t.Helper()
 	db, err := store.Open(filepath.Join(dir, "state.db"))
@@ -405,6 +579,12 @@ func writeDefaultImageArtifacts(t *testing.T, dir string) (rootfs, kernel, initr
 	return rootfs, kernel, initrd, modulesDir, packages
 }
 
+func modelPathsLayoutForTest(dir string) paths.Layout {
+	return paths.Layout{
+		ImagesDir: filepath.Join(dir, "images"),
+	}
+}
+
 func TestStartVMDNSFailsWhenAddressBusy(t *testing.T) {
 	t.Parallel()
@@ -2,12 +2,17 @@ package daemon
 
 import (
 	"context"
+	"crypto/rand"
+	"crypto/rsa"
+	"crypto/x509"
+	"encoding/pem"
 	"errors"
 	"os"
 	"path/filepath"
 	"strconv"
 	"testing"
 
+	"banger/internal/guest"
 	"banger/internal/model"
 )
@@ -34,7 +39,7 @@ func TestEnsureWorkDiskClonesSeedImageAndResizes(t *testing.T) {
 	image := testImage("image-seeded")
 	image.WorkSeedPath = seedPath
 
-	if err := d.ensureWorkDisk(context.Background(), &vm, image); err != nil {
+	if _, err := d.ensureWorkDisk(context.Background(), &vm, image); err != nil {
 		t.Fatalf("ensureWorkDisk: %v", err)
 	}
 	runner.assertExhausted()
@@ -90,3 +95,38 @@ func TestTapPoolWarmsAndReusesIdleTap(t *testing.T) {
 	}
 	runner.assertExhausted()
 }
+
+func TestEnsureAuthorizedKeyOnWorkDiskSkipsRepairForMatchingSeededFingerprint(t *testing.T) {
+	t.Parallel()
+
+	privateKey, err := rsa.GenerateKey(rand.Reader, 1024)
+	if err != nil {
+		t.Fatalf("GenerateKey: %v", err)
+	}
+	privateKeyPEM := pem.EncodeToMemory(&pem.Block{
+		Type:  "RSA PRIVATE KEY",
+		Bytes: x509.MarshalPKCS1PrivateKey(privateKey),
+	})
+	sshKeyPath := filepath.Join(t.TempDir(), "id_rsa")
+	if err := os.WriteFile(sshKeyPath, privateKeyPEM, 0o600); err != nil {
+		t.Fatalf("WriteFile(private key): %v", err)
+	}
+	fingerprint, err := guest.AuthorizedPublicKeyFingerprint(sshKeyPath)
+	if err != nil {
+		t.Fatalf("AuthorizedPublicKeyFingerprint: %v", err)
+	}
+
+	runner := &scriptedRunner{t: t}
+	d := &Daemon{
+		runner: runner,
+		config: model.DaemonConfig{SSHKeyPath: sshKeyPath},
+	}
+	vm := testVM("seeded-fastpath", "image-seeded-fastpath", "172.16.0.62")
+	vm.Runtime.WorkDiskPath = filepath.Join(t.TempDir(), "root.ext4")
+	image := model.Image{SeededSSHPublicKeyFingerprint: fingerprint}
+
+	if err := d.ensureAuthorizedKeyOnWorkDisk(context.Background(), &vm, image, workDiskPreparation{ClonedFromSeed: true}); err != nil {
+		t.Fatalf("ensureAuthorizedKeyOnWorkDisk: %v", err)
+	}
+	runner.assertExhausted()
+}
86 internal/daemon/image_seed.go Normal file
@@ -0,0 +1,86 @@
+package daemon
+
+import (
+	"context"
+	"fmt"
+	"os"
+	"path/filepath"
+	"strings"
+
+	"banger/internal/guest"
+	"banger/internal/model"
+	"banger/internal/system"
+)
+
+func (d *Daemon) seedAuthorizedKeyOnExt4Image(ctx context.Context, imagePath string) (string, error) {
+	if strings.TrimSpace(d.config.SSHKeyPath) == "" {
+		return "", nil
+	}
+	fingerprint, err := guest.AuthorizedPublicKeyFingerprint(d.config.SSHKeyPath)
+	if err != nil {
+		return "", fmt.Errorf("derive authorized ssh key fingerprint: %w", err)
+	}
+	publicKey, err := guest.AuthorizedPublicKey(d.config.SSHKeyPath)
+	if err != nil {
+		return "", fmt.Errorf("derive authorized ssh key: %w", err)
+	}
+	mountDir, cleanup, err := system.MountTempDir(ctx, d.runner, imagePath, false)
+	if err != nil {
+		return "", err
+	}
+	defer cleanup()
+
+	if err := d.flattenNestedWorkHome(ctx, mountDir); err != nil {
+		return "", err
+	}
+
+	sshDir := filepath.Join(mountDir, ".ssh")
+	if _, err := d.runner.RunSudo(ctx, "mkdir", "-p", sshDir); err != nil {
+		return "", err
+	}
+	if _, err := d.runner.RunSudo(ctx, "chmod", "700", sshDir); err != nil {
+		return "", err
+	}
+
+	authorizedKeysPath := filepath.Join(sshDir, "authorized_keys")
+	existing, err := d.runner.RunSudo(ctx, "cat", authorizedKeysPath)
+	if err != nil {
+		existing = nil
+	}
+	merged := mergeAuthorizedKey(existing, publicKey)
+	tmpFile, err := os.CreateTemp("", "banger-image-authorized-keys-*")
+	if err != nil {
+		return "", err
+	}
+	tmpPath := tmpFile.Name()
+	if _, err := tmpFile.Write(merged); err != nil {
+		_ = tmpFile.Close()
+		_ = os.Remove(tmpPath)
+		return "", err
+	}
+	if err := tmpFile.Close(); err != nil {
+		_ = os.Remove(tmpPath)
+		return "", err
+	}
+	defer os.Remove(tmpPath)
+	if _, err := d.runner.RunSudo(ctx, "install", "-m", "600", tmpPath, authorizedKeysPath); err != nil {
+		return "", err
+	}
+	return fingerprint, nil
+}
+
+func (d *Daemon) refreshManagedWorkSeedFingerprint(ctx context.Context, image model.Image, fingerprint string) error {
+	if !image.Managed || strings.TrimSpace(image.WorkSeedPath) == "" || strings.TrimSpace(fingerprint) == "" {
+		return nil
+	}
+	seededFingerprint, err := d.seedAuthorizedKeyOnExt4Image(ctx, image.WorkSeedPath)
+	if err != nil {
+		return err
+	}
+	if seededFingerprint == "" || seededFingerprint == image.SeededSSHPublicKeyFingerprint {
+		return nil
+	}
+	image.SeededSSHPublicKeyFingerprint = seededFingerprint
+	image.UpdatedAt = model.Now()
+	return d.store.UpsertImage(ctx, image)
+}
@@ -14,8 +14,10 @@ import (
 
 	"banger/internal/firecracker"
 	"banger/internal/guest"
+	"banger/internal/guestnet"
 	"banger/internal/hostnat"
 	"banger/internal/model"
+	"banger/internal/opencode"
 	"banger/internal/system"
 	"banger/internal/vsockagent"
 )
@@ -103,6 +105,10 @@ func (d *Daemon) runImageBuildNative(ctx context.Context, spec imageBuildSpec) (
 		return err
 	}
 	defer client.Close()
+	authorizedKey, err := guest.AuthorizedPublicKey(d.config.SSHKeyPath)
+	if err != nil {
+		return err
+	}
 
 	helperBytes, err := os.ReadFile(d.config.VSockAgentPath)
 	if err != nil {
@@ -117,7 +123,7 @@
 	if err := writeBuildLog(spec.BuildLog, "configuring guest"); err != nil {
 		return err
 	}
-	if err := client.RunScript(ctx, buildProvisionScript(vm.Name, d.config.DefaultDNS, packages, spec.InstallDocker), spec.BuildLog); err != nil {
+	if err := client.RunScript(ctx, buildProvisionScript(vm.Name, d.config.DefaultDNS, string(authorizedKey), packages, spec.InstallDocker), spec.BuildLog); err != nil {
 		return err
 	}
 	if strings.TrimSpace(spec.ModulesDir) != "" {
@@ -250,7 +256,7 @@ func (d *Daemon) shutdownImageBuildVM(ctx context.Context, vm imageBuildVM) erro
 	return d.waitForExit(ctx, vm.PID, vm.APISock, 15*time.Second)
 }
 
-func buildProvisionScript(vmName, dnsServer string, packages []string, installDocker bool) string {
+func buildProvisionScript(vmName, dnsServer, authorizedKey string, packages []string, installDocker bool) string {
 	var script bytes.Buffer
 	script.WriteString("set -euo pipefail\n")
 	fmt.Fprintf(&script, "printf 'nameserver %%s\\n' %s > /etc/resolv.conf\n", shellQuote(dnsServer))
@@ -260,11 +266,14 @@ func buildProvisionScript(vmName, dnsServer string, packages []string, installDo
 	script.WriteString("sed -i '\\|^/dev/vdb[[:space:]]\\+/home[[:space:]]|d; \\|^/dev/vdc[[:space:]]\\+/var[[:space:]]|d' /etc/fstab\n")
 	script.WriteString("if ! grep -q '^tmpfs /run ' /etc/fstab; then echo 'tmpfs /run tmpfs defaults,nodev,nosuid,mode=0755 0 0' >> /etc/fstab; fi\n")
 	script.WriteString("if ! grep -q '^tmpfs /tmp ' /etc/fstab; then echo 'tmpfs /tmp tmpfs defaults,nodev,nosuid,mode=1777 0 0' >> /etc/fstab; fi\n")
+	appendAuthorizedKeySetup(&script, authorizedKey)
 	script.WriteString("apt-get update\n")
 	script.WriteString("DEBIAN_FRONTEND=noninteractive apt-get -y upgrade\n")
 	fmt.Fprintf(&script, "PACKAGES=%s\n", shellArray(packages))
 	script.WriteString("DEBIAN_FRONTEND=noninteractive apt-get -y install \"${PACKAGES[@]}\"\n")
+	appendGuestNetworkSetup(&script)
 	appendMiseSetup(&script)
+	appendOpenCodeServiceSetup(&script)
 	appendTmuxSetup(&script)
 	appendVSockPingSetup(&script)
 	if installDocker {
@@ -279,6 +288,15 @@ func buildProvisionScript(vmName, dnsServer string, packages []string, installDo
 	return script.String()
 }

+func appendAuthorizedKeySetup(script *bytes.Buffer, authorizedKey string) {
+	script.WriteString("mkdir -p /root/.ssh\n")
+	script.WriteString("chmod 700 /root/.ssh\n")
+	script.WriteString("cat > /root/.ssh/authorized_keys <<'EOF'\n")
+	script.WriteString(strings.TrimSpace(authorizedKey))
+	script.WriteString("\nEOF\n")
+	script.WriteString("chmod 600 /root/.ssh/authorized_keys\n")
+}
+
 func buildModulesCommand(modulesBase string) string {
 	return fmt.Sprintf("bash -se <<'EOF'\nset -euo pipefail\nmkdir -p /lib/modules\ntar -C /lib/modules -xf -\ndepmod -a %s\nmkdir -p /etc/modules-load.d\nprintf 'nf_tables\\nnft_chain_nat\\nveth\\nbr_netfilter\\noverlay\\n' > /etc/modules-load.d/docker-netfilter.conf\nmkdir -p /etc/sysctl.d\ncat > /etc/sysctl.d/99-docker.conf <<'SYSCTL'\nnet.bridge.bridge-nf-call-iptables = 1\nnet.bridge.bridge-nf-call-ip6tables = 1\nnet.ipv4.ip_forward = 1\nSYSCTL\nsysctl --system >/dev/null 2>&1 || true\nEOF", shellQuote(modulesBase))
 }
@@ -286,6 +304,9 @@ func buildModulesCommand(modulesBase string) {
 func appendMiseSetup(script *bytes.Buffer) {
 	fmt.Fprintf(script, "curl -fsSL https://mise.run | MISE_INSTALL_PATH=%s MISE_VERSION=%s sh\n", shellQuote(defaultMiseInstallPath), shellQuote(defaultMiseVersion))
 	fmt.Fprintf(script, "%s use -g %s\n", shellQuote(defaultMiseInstallPath), shellQuote(defaultOpenCodeTool))
+	fmt.Fprintf(script, "%s reshim\n", shellQuote(defaultMiseInstallPath))
+	fmt.Fprintf(script, "if [[ ! -e %s ]]; then echo 'opencode shim not found after mise install' >&2; exit 1; fi\n", shellQuote(opencode.ShimPath))
+	fmt.Fprintf(script, "ln -snf %s %s\n", shellQuote(opencode.ShimPath), shellQuote(opencode.GuestBinaryPath))
 	script.WriteString("mkdir -p /etc/profile.d\n")
 	script.WriteString("cat > /etc/profile.d/mise.sh <<'EOF'\n")
 	fmt.Fprintf(script, "if [ -n \"${BASH_VERSION:-}\" ] && [ -x %s ]; then\n", shellQuote(defaultMiseInstallPath))
@@ -296,6 +317,28 @@ func appendMiseSetup(script *bytes.Buffer) {
 	appendLineIfMissing(script, "/etc/bash.bashrc", defaultMiseActivateLine)
 }

+func appendGuestNetworkSetup(script *bytes.Buffer) {
+	script.WriteString("mkdir -p /usr/local/libexec /etc/systemd/system\n")
+	script.WriteString("cat > " + guestnet.GuestScriptPath + " <<'EOF'\n")
+	script.WriteString(guestnet.BootstrapScript())
+	script.WriteString("EOF\n")
+	script.WriteString("chmod 0755 " + guestnet.GuestScriptPath + "\n")
+	script.WriteString("cat > /etc/systemd/system/" + guestnet.SystemdServiceName + " <<'EOF'\n")
+	script.WriteString(guestnet.SystemdServiceUnit())
+	script.WriteString("EOF\n")
+	script.WriteString("chmod 0644 /etc/systemd/system/" + guestnet.SystemdServiceName + "\n")
+	script.WriteString("if command -v systemctl >/dev/null 2>&1; then systemctl daemon-reload || true; systemctl enable --now " + guestnet.SystemdServiceName + " || true; fi\n")
+}
+
+func appendOpenCodeServiceSetup(script *bytes.Buffer) {
+	script.WriteString("mkdir -p /etc/systemd/system\n")
+	script.WriteString("cat > /etc/systemd/system/" + opencode.ServiceName + " <<'EOF'\n")
+	script.WriteString(opencode.ServiceUnit())
+	script.WriteString("EOF\n")
+	script.WriteString("chmod 0644 /etc/systemd/system/" + opencode.ServiceName + "\n")
+	script.WriteString("if command -v systemctl >/dev/null 2>&1; then systemctl daemon-reload || true; systemctl enable --now " + opencode.ServiceName + " || true; fi\n")
+}
+
 func appendTmuxSetup(script *bytes.Buffer) {
 	fmt.Fprintf(script, "TMUX_PLUGIN_DIR=%s\n", shellQuote(defaultTMUXPluginDir))
 	fmt.Fprintf(script, "TMUX_RESURRECT_DIR=%s\n", shellQuote(defaultTMUXResurrectDir))
@@ -8,14 +8,28 @@ import (
 func TestBuildProvisionScriptInstallsDefaultTools(t *testing.T) {
 	t.Parallel()

-	script := buildProvisionScript("devbox", "1.1.1.1", []string{"git", "curl"}, false)
+	script := buildProvisionScript("devbox", "1.1.1.1", "ssh-ed25519 AAAATESTKEY banger", []string{"git", "curl"}, false)
 	for _, snippet := range []string{
+		"mkdir -p /root/.ssh",
+		"cat > /root/.ssh/authorized_keys <<'EOF'",
+		"ssh-ed25519 AAAATESTKEY banger",
+		"cat > /usr/local/libexec/banger-network-bootstrap <<'EOF'",
+		"ip addr replace \"$guest_ip/$prefix\" dev \"$iface\"",
+		"cat > /etc/systemd/system/banger-network.service <<'EOF'",
+		"systemctl enable --now banger-network.service || true",
 		"curl -fsSL https://mise.run | MISE_INSTALL_PATH='/usr/local/bin/mise' MISE_VERSION='v2025.12.0' sh",
 		"'/usr/local/bin/mise' use -g 'github:anomalyco/opencode'",
+		"'/usr/local/bin/mise' reshim",
+		"if [[ ! -e '/root/.local/share/mise/shims/opencode' ]]; then echo 'opencode shim not found after mise install' >&2; exit 1; fi",
+		"ln -snf '/root/.local/share/mise/shims/opencode' '/usr/local/bin/opencode'",
 		"cat > /etc/profile.d/mise.sh <<'EOF'",
 		"if [ -n \"${BASH_VERSION:-}\" ] && [ -x '/usr/local/bin/mise' ]; then",
 		`eval "$(/usr/local/bin/mise activate bash)"`,
 		`if ! grep -Fqx 'eval "$(/usr/local/bin/mise activate bash)"' '/etc/bash.bashrc'; then`,
+		"cat > /etc/systemd/system/banger-opencode.service <<'EOF'",
+		"RequiresMountsFor=/root",
+		"ExecStart=/usr/local/bin/opencode serve --hostname 0.0.0.0 --port 4096",
+		"systemctl enable --now banger-opencode.service || true",
 		`git clone --depth 1 'https://github.com/tmux-plugins/tpm' "$TMUX_PLUGIN_DIR/tpm"`,
 		`git clone --depth 1 'https://github.com/tmux-plugins/tmux-resurrect' "$TMUX_PLUGIN_DIR/tmux-resurrect"`,
 		`git clone --depth 1 'https://github.com/tmux-plugins/tmux-continuum' "$TMUX_PLUGIN_DIR/tmux-continuum"`,
@@ -103,26 +103,33 @@ func (d *Daemon) BuildImage(ctx context.Context, params api.ImageBuildParams) (i
 		_ = os.RemoveAll(artifactDir)
 		return model.Image{}, err
 	}
+	seededSSHPublicKeyFingerprint, err := d.seedAuthorizedKeyOnExt4Image(ctx, workSeedPath)
+	if err != nil {
+		_ = logFile.Sync()
+		_ = os.RemoveAll(artifactDir)
+		return model.Image{}, err
+	}
 	if err := writePackagesMetadata(rootfsPath, d.config.DefaultPackagesFile); err != nil {
 		_ = logFile.Sync()
 		_ = os.RemoveAll(artifactDir)
 		return model.Image{}, err
 	}
 	image = model.Image{
 		ID:           id,
 		Name:         name,
 		Managed:      true,
 		ArtifactDir:  artifactDir,
 		RootfsPath:   rootfsPath,
 		WorkSeedPath: workSeedPath,
 		KernelPath:   kernelPath,
 		InitrdPath:   initrdPath,
 		ModulesDir:   modulesDir,
 		PackagesPath: d.config.DefaultPackagesFile,
 		BuildSize:    params.Size,
+		SeededSSHPublicKeyFingerprint: seededSSHPublicKeyFingerprint,
 		Docker:       params.Docker,
 		CreatedAt:    now,
 		UpdatedAt:    now,
 	}
 	if err := d.store.UpsertImage(ctx, image); err != nil {
 		return model.Image{}, err
@@ -220,6 +227,105 @@ func (d *Daemon) RegisterImage(ctx context.Context, params api.ImageRegisterPara
 	return image, nil
 }

+func (d *Daemon) PromoteImage(ctx context.Context, idOrName string) (image model.Image, err error) {
+	d.mu.Lock()
+	defer d.mu.Unlock()
+
+	op := d.beginOperation("image.promote")
+	defer func() {
+		if err != nil {
+			op.fail(err, imageLogAttrs(image)...)
+			return
+		}
+		op.done(imageLogAttrs(image)...)
+	}()
+
+	image, err = d.FindImage(ctx, idOrName)
+	if err != nil {
+		return model.Image{}, err
+	}
+	if image.Managed {
+		return model.Image{}, fmt.Errorf("image %s is already managed", image.Name)
+	}
+	if err := validateImagePromotePaths(image.RootfsPath, image.KernelPath, image.InitrdPath, image.ModulesDir, image.PackagesPath); err != nil {
+		return model.Image{}, err
+	}
+	if strings.TrimSpace(d.layout.ImagesDir) == "" {
+		return model.Image{}, errors.New("images dir is not configured")
+	}
+	if err := os.MkdirAll(d.layout.ImagesDir, 0o755); err != nil {
+		return model.Image{}, err
+	}
+
+	artifactDir := filepath.Join(d.layout.ImagesDir, image.ID)
+	if _, statErr := os.Stat(artifactDir); statErr == nil {
+		return model.Image{}, fmt.Errorf("artifact dir already exists: %s", artifactDir)
+	} else if !os.IsNotExist(statErr) {
+		return model.Image{}, statErr
+	}
+
+	stageDir, err := os.MkdirTemp(d.layout.ImagesDir, image.ID+".promote-")
+	if err != nil {
+		return model.Image{}, err
+	}
+	cleanupStage := true
+	defer func() {
+		if cleanupStage {
+			_ = os.RemoveAll(stageDir)
+		}
+	}()
+
+	rootfsPath := filepath.Join(stageDir, "rootfs.ext4")
+	op.stage("copy_rootfs", "source_rootfs_path", image.RootfsPath, "target_rootfs_path", rootfsPath)
+	if err := system.CopyFilePreferClone(image.RootfsPath, rootfsPath); err != nil {
+		return model.Image{}, err
+	}
+
+	workSeedPath := ""
+	if image.WorkSeedPath != "" {
+		if _, statErr := os.Stat(image.WorkSeedPath); statErr != nil {
+			if os.IsNotExist(statErr) {
+				op.stage("skip_missing_work_seed", "source_work_seed_path", image.WorkSeedPath)
+				image.WorkSeedPath = ""
+			} else {
+				return model.Image{}, statErr
+			}
+		}
+	}
+	if image.WorkSeedPath != "" {
+		workSeedPath = filepath.Join(stageDir, "work-seed.ext4")
+		op.stage("copy_work_seed", "source_work_seed_path", image.WorkSeedPath, "target_work_seed_path", workSeedPath)
+		if err := system.CopyFilePreferClone(image.WorkSeedPath, workSeedPath); err != nil {
+			return model.Image{}, err
+		}
+		image.SeededSSHPublicKeyFingerprint, err = d.seedAuthorizedKeyOnExt4Image(ctx, workSeedPath)
+		if err != nil {
+			return model.Image{}, err
+		}
+	} else {
+		image.SeededSSHPublicKeyFingerprint = ""
+	}
+
+	op.stage("activate_artifacts", "artifact_dir", artifactDir)
+	if err := os.Rename(stageDir, artifactDir); err != nil {
+		return model.Image{}, err
+	}
+	cleanupStage = false
+
+	image.Managed = true
+	image.ArtifactDir = artifactDir
+	image.RootfsPath = filepath.Join(artifactDir, "rootfs.ext4")
+	if workSeedPath != "" {
+		image.WorkSeedPath = filepath.Join(artifactDir, "work-seed.ext4")
+	}
+	image.UpdatedAt = model.Now()
+	if err := d.store.UpsertImage(ctx, image); err != nil {
+		_ = os.RemoveAll(artifactDir)
+		return model.Image{}, err
+	}
+	return image, nil
+}
+
 func validateImageRegisterPaths(rootfsPath, workSeedPath, kernelPath, initrdPath, modulesDir, packagesPath string) error {
 	checks := system.NewPreflight()
 	checks.RequireFile(rootfsPath, "rootfs image", `pass --rootfs <path>`)
@@ -239,6 +345,22 @@ func validateImageRegisterPaths(rootfsPath, workSeedPath, kernelPath, initrdPath
 	return checks.Err("image register failed")
 }

+func validateImagePromotePaths(rootfsPath, kernelPath, initrdPath, modulesDir, packagesPath string) error {
+	checks := system.NewPreflight()
+	checks.RequireFile(rootfsPath, "rootfs image", `re-register the image with a valid rootfs`)
+	checks.RequireFile(kernelPath, "kernel image", `re-register the image with a valid kernel`)
+	if initrdPath != "" {
+		checks.RequireFile(initrdPath, "initrd image", `re-register the image with a valid initrd`)
+	}
+	if modulesDir != "" {
+		checks.RequireDir(modulesDir, "kernel modules dir", `re-register the image with a valid modules dir`)
+	}
+	if packagesPath != "" {
+		checks.RequireFile(packagesPath, "packages manifest", `re-register the image with a valid packages manifest`)
+	}
+	return checks.Err("image promote failed")
+}
+
 func writePackagesMetadata(rootfsPath, packagesPath string) error {
 	if rootfsPath == "" || packagesPath == "" {
 		return nil
internal/daemon/opencode.go (new file, 18 lines)
@@ -0,0 +1,18 @@
+package daemon
+
+import (
+	"context"
+
+	"banger/internal/model"
+	"banger/internal/opencode"
+)
+
+type opencodeCapability struct{}
+
+func (opencodeCapability) Name() string { return "opencode" }
+
+func (opencodeCapability) PostStart(ctx context.Context, d *Daemon, vm model.VMRecord, _ model.Image) error {
+	return opencode.WaitReady(ctx, d.logger, vm.Runtime.VSockPath, func(stage, detail string) {
+		vmCreateStage(ctx, stage, detail)
+	})
+}
@@ -49,10 +49,12 @@ func (d *Daemon) CreateVM(ctx context.Context, params api.VMCreateParams) (vm mo
 	if imageName == "" {
 		imageName = d.config.DefaultImageName
 	}
+	vmCreateStage(ctx, "resolve_image", "resolving image")
 	image, err := d.FindImage(ctx, imageName)
 	if err != nil {
 		return model.VMRecord{}, err
 	}
+	vmCreateStage(ctx, "resolve_image", "using image "+image.Name)
 	op.stage("image_resolved", imageLogAttrs(image)...)
 	name := strings.TrimSpace(params.Name)
 	if name == "" {
@@ -126,6 +128,8 @@ func (d *Daemon) CreateVM(ctx context.Context, params api.VMCreateParams) (vm mo
 			MetricsPath: filepath.Join(vmDir, "metrics.json"),
 		},
 	}
+	vmCreateBindVM(ctx, vm)
+	vmCreateStage(ctx, "reserve_vm", fmt.Sprintf("allocated %s (%s)", vm.Name, vm.Runtime.GuestIP))
 	if err := d.store.UpsertVM(ctx, vm); err != nil {
 		return model.VMRecord{}, err
 	}
@@ -168,6 +172,7 @@ func (d *Daemon) startVMLocked(ctx context.Context, vm model.VMRecord, image mod
 		op.done(vmLogAttrs(vm)...)
 	}()
 	op.stage("preflight")
+	vmCreateStage(ctx, "preflight", "checking host prerequisites")
 	if err := d.validateStartPrereqs(ctx, vm, image); err != nil {
 		return model.VMRecord{}, err
 	}
@@ -209,11 +214,13 @@ func (d *Daemon) startVMLocked(ctx context.Context, vm model.VMRecord, image mod
 	}

 	op.stage("system_overlay", "overlay_path", vm.Runtime.SystemOverlay)
+	vmCreateStage(ctx, "prepare_rootfs", "preparing system overlay")
 	if err := d.ensureSystemOverlay(ctx, &vm); err != nil {
 		return model.VMRecord{}, err
 	}

 	op.stage("dm_snapshot", "dm_name", dmName)
+	vmCreateStage(ctx, "prepare_rootfs", "creating root filesystem snapshot")
 	handles, err := d.createDMSnapshot(ctx, image.RootfsPath, vm.Runtime.SystemOverlay, dmName)
 	if err != nil {
 		return model.VMRecord{}, err
@@ -241,10 +248,12 @@ func (d *Daemon) startVMLocked(ctx context.Context, vm model.VMRecord, image mod
 	}

 	op.stage("patch_root_overlay")
+	vmCreateStage(ctx, "prepare_rootfs", "writing guest configuration")
 	if err := d.patchRootOverlay(ctx, vm, image); err != nil {
 		return cleanupOnErr(err)
 	}
 	op.stage("prepare_host_features")
+	vmCreateStage(ctx, "prepare_host_features", "preparing host-side vm features")
 	if err := d.prepareCapabilityHosts(ctx, &vm, image); err != nil {
 		return cleanupOnErr(err)
 	}
@@ -265,6 +274,7 @@ func (d *Daemon) startVMLocked(ctx context.Context, vm model.VMRecord, image mod
 		return cleanupOnErr(err)
 	}
 	op.stage("firecracker_launch", "log_path", vm.Runtime.LogPath, "metrics_path", vm.Runtime.MetricsPath)
+	vmCreateStage(ctx, "boot_firecracker", "starting firecracker")
 	firecrackerCtx := context.Background()
 	machineConfig := firecracker.MachineConfig{
 		BinaryPath: fcPath,
@@ -304,15 +314,18 @@ func (d *Daemon) startVMLocked(ctx context.Context, vm model.VMRecord, image mod
 		return cleanupOnErr(err)
 	}
 	op.stage("vsock_access", "vsock_path", vm.Runtime.VSockPath, "vsock_cid", vm.Runtime.VSockCID)
+	vmCreateStage(ctx, "wait_vsock_agent", "waiting for guest vsock agent")
 	if err := d.ensureSocketAccess(ctx, vm.Runtime.VSockPath, "firecracker vsock socket"); err != nil {
 		return cleanupOnErr(err)
 	}
 	op.stage("post_start_features")
+	vmCreateStage(ctx, "wait_guest_ready", "waiting for guest services")
 	if err := d.postStartCapabilities(ctx, vm, image); err != nil {
 		return cleanupOnErr(err)
 	}
 	system.TouchNow(&vm)
 	op.stage("persist")
+	vmCreateStage(ctx, "finalize", "saving vm state")
 	if err := d.store.UpsertVM(ctx, vm); err != nil {
 		return cleanupOnErr(err)
 	}
@@ -777,58 +790,75 @@ func (d *Daemon) patchRootOverlay(ctx context.Context, vm model.VMRecord, image
 	return nil
 }

-func (d *Daemon) ensureWorkDisk(ctx context.Context, vm *model.VMRecord, image model.Image) error {
+type workDiskPreparation struct {
+	ClonedFromSeed bool
+}
+
+func (d *Daemon) ensureWorkDisk(ctx context.Context, vm *model.VMRecord, image model.Image) (workDiskPreparation, error) {
 	if exists(vm.Runtime.WorkDiskPath) {
-		return nil
+		return workDiskPreparation{}, nil
 	}
 	if exists(image.WorkSeedPath) {
+		vmCreateStage(ctx, "prepare_work_disk", "cloning work seed")
 		if err := system.CopyFilePreferClone(image.WorkSeedPath, vm.Runtime.WorkDiskPath); err != nil {
-			return err
+			return workDiskPreparation{}, err
 		}
 		seedInfo, err := os.Stat(image.WorkSeedPath)
 		if err != nil {
-			return err
+			return workDiskPreparation{}, err
 		}
 		if vm.Spec.WorkDiskSizeBytes < seedInfo.Size() {
-			return fmt.Errorf("requested work disk size %d is smaller than seed image %d", vm.Spec.WorkDiskSizeBytes, seedInfo.Size())
+			return workDiskPreparation{}, fmt.Errorf("requested work disk size %d is smaller than seed image %d", vm.Spec.WorkDiskSizeBytes, seedInfo.Size())
 		}
 		if vm.Spec.WorkDiskSizeBytes > seedInfo.Size() {
+			vmCreateStage(ctx, "prepare_work_disk", "resizing work disk")
 			if err := system.ResizeExt4Image(ctx, d.runner, vm.Runtime.WorkDiskPath, vm.Spec.WorkDiskSizeBytes); err != nil {
-				return err
+				return workDiskPreparation{}, err
 			}
 		}
-		return nil
+		return workDiskPreparation{ClonedFromSeed: true}, nil
 	}
+	vmCreateStage(ctx, "prepare_work_disk", "creating empty work disk")
 	if _, err := d.runner.Run(ctx, "truncate", "-s", strconv.FormatInt(vm.Spec.WorkDiskSizeBytes, 10), vm.Runtime.WorkDiskPath); err != nil {
-		return err
+		return workDiskPreparation{}, err
 	}
 	if _, err := d.runner.Run(ctx, "mkfs.ext4", "-F", vm.Runtime.WorkDiskPath); err != nil {
-		return err
+		return workDiskPreparation{}, err
 	}
 	rootMount, cleanupRoot, err := system.MountTempDir(ctx, d.runner, vm.Runtime.DMDev, true)
 	if err != nil {
-		return err
+		return workDiskPreparation{}, err
 	}
 	defer cleanupRoot()
 	workMount, cleanupWork, err := system.MountTempDir(ctx, d.runner, vm.Runtime.WorkDiskPath, false)
 	if err != nil {
-		return err
+		return workDiskPreparation{}, err
 	}
 	defer cleanupWork()
+	vmCreateStage(ctx, "prepare_work_disk", "copying /root into work disk")
 	if err := system.CopyDirContents(ctx, d.runner, filepath.Join(rootMount, "root"), workMount, true); err != nil {
-		return err
+		return workDiskPreparation{}, err
 	}
 	if err := d.flattenNestedWorkHome(ctx, workMount); err != nil {
-		return err
+		return workDiskPreparation{}, err
 	}
-	return nil
+	return workDiskPreparation{}, nil
 }

-func (d *Daemon) ensureAuthorizedKeyOnWorkDisk(ctx context.Context, vm *model.VMRecord) error {
+func (d *Daemon) ensureAuthorizedKeyOnWorkDisk(ctx context.Context, vm *model.VMRecord, image model.Image, prep workDiskPreparation) error {
+	fingerprint, err := guest.AuthorizedPublicKeyFingerprint(d.config.SSHKeyPath)
+	if err != nil {
+		return fmt.Errorf("derive authorized ssh key fingerprint: %w", err)
+	}
+	if prep.ClonedFromSeed && image.SeededSSHPublicKeyFingerprint != "" && image.SeededSSHPublicKeyFingerprint == fingerprint {
+		vmCreateStage(ctx, "prepare_work_disk", "using seeded SSH access")
+		return nil
+	}
 	publicKey, err := guest.AuthorizedPublicKey(d.config.SSHKeyPath)
 	if err != nil {
 		return fmt.Errorf("derive authorized ssh key: %w", err)
 	}
+	vmCreateStage(ctx, "prepare_work_disk", "repairing SSH access on work disk")
 	workMount, cleanupWork, err := system.MountTempDir(ctx, d.runner, vm.Runtime.WorkDiskPath, false)
 	if err != nil {
 		return err
@ -873,6 +903,12 @@ func (d *Daemon) ensureAuthorizedKeyOnWorkDisk(ctx context.Context, vm *model.VM
|
||||||
if _, err := d.runner.RunSudo(ctx, "install", "-m", "600", tmpPath, authorizedKeysPath); err != nil {
|
if _, err := d.runner.RunSudo(ctx, "install", "-m", "600", tmpPath, authorizedKeysPath); err != nil {
|
||||||
return err
|
return err
|
||||||
}
|
}
|
||||||
|
if prep.ClonedFromSeed && image.Managed {
|
||||||
|
vmCreateStage(ctx, "prepare_work_disk", "refreshing managed work seed")
|
||||||
|
if err := d.refreshManagedWorkSeedFingerprint(ctx, image, fingerprint); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
}
|
||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
|
|
||||||
205	internal/daemon/vm_create_ops.go	Normal file

@@ -0,0 +1,205 @@
+package daemon
+
+import (
+	"context"
+	"fmt"
+	"strings"
+	"sync"
+	"time"
+
+	"banger/internal/api"
+	"banger/internal/model"
+)
+
+type vmCreateProgressKey struct{}
+
+type vmCreateOperationState struct {
+	mu     sync.Mutex
+	cancel context.CancelFunc
+	op     api.VMCreateOperation
+}
+
+func newVMCreateOperationState() (*vmCreateOperationState, error) {
+	id, err := model.NewID()
+	if err != nil {
+		return nil, err
+	}
+	now := model.Now()
+	return &vmCreateOperationState{
+		op: api.VMCreateOperation{
+			ID:        id,
+			Stage:     "queued",
+			Detail:    "waiting to start",
+			StartedAt: now,
+			UpdatedAt: now,
+		},
+	}, nil
+}
+
+func withVMCreateProgress(ctx context.Context, op *vmCreateOperationState) context.Context {
+	if op == nil {
+		return ctx
+	}
+	return context.WithValue(ctx, vmCreateProgressKey{}, op)
+}
+
+func vmCreateProgressFromContext(ctx context.Context) *vmCreateOperationState {
+	if ctx == nil {
+		return nil
+	}
+	op, _ := ctx.Value(vmCreateProgressKey{}).(*vmCreateOperationState)
+	return op
+}
+
+func vmCreateStage(ctx context.Context, stage, detail string) {
+	if op := vmCreateProgressFromContext(ctx); op != nil {
+		op.stage(stage, detail)
+	}
+}
+
+func vmCreateBindVM(ctx context.Context, vm model.VMRecord) {
+	if op := vmCreateProgressFromContext(ctx); op != nil {
+		op.bindVM(vm)
+	}
+}
+
+func (op *vmCreateOperationState) setCancel(cancel context.CancelFunc) {
+	op.mu.Lock()
+	defer op.mu.Unlock()
+	op.cancel = cancel
+}
+
+func (op *vmCreateOperationState) bindVM(vm model.VMRecord) {
+	op.mu.Lock()
+	defer op.mu.Unlock()
+	op.op.VMID = vm.ID
+	op.op.VMName = vm.Name
+}
+
+func (op *vmCreateOperationState) stage(stage, detail string) {
+	op.mu.Lock()
+	defer op.mu.Unlock()
+	stage = strings.TrimSpace(stage)
+	detail = strings.TrimSpace(detail)
+	if stage == "" {
+		stage = op.op.Stage
+	}
+	if stage == op.op.Stage && detail == op.op.Detail {
+		return
+	}
+	op.op.Stage = stage
+	op.op.Detail = detail
+	op.op.UpdatedAt = model.Now()
+}
+
+func (op *vmCreateOperationState) done(vm model.VMRecord) {
+	op.mu.Lock()
+	defer op.mu.Unlock()
+	vmCopy := vm
+	op.op.VMID = vm.ID
+	op.op.VMName = vm.Name
+	op.op.Stage = "ready"
+	op.op.Detail = "vm is ready"
+	op.op.Done = true
+	op.op.Success = true
+	op.op.Error = ""
+	op.op.VM = &vmCopy
+	op.op.UpdatedAt = model.Now()
+}
+
+func (op *vmCreateOperationState) fail(err error) {
+	op.mu.Lock()
+	defer op.mu.Unlock()
+	op.op.Done = true
+	op.op.Success = false
+	if err != nil {
+		op.op.Error = err.Error()
+	}
+	if strings.TrimSpace(op.op.Detail) == "" {
+		op.op.Detail = "vm create failed"
+	}
+	op.op.UpdatedAt = model.Now()
+}
+
+func (op *vmCreateOperationState) snapshot() api.VMCreateOperation {
+	op.mu.Lock()
+	defer op.mu.Unlock()
+	snapshot := op.op
+	if snapshot.VM != nil {
+		vmCopy := *snapshot.VM
+		snapshot.VM = &vmCopy
+	}
+	return snapshot
+}
+
+func (op *vmCreateOperationState) cancelOperation() {
+	op.mu.Lock()
+	cancel := op.cancel
+	op.mu.Unlock()
+	if cancel != nil {
+		cancel()
+	}
+}
+
+func (d *Daemon) BeginVMCreate(_ context.Context, params api.VMCreateParams) (api.VMCreateOperation, error) {
+	op, err := newVMCreateOperationState()
+	if err != nil {
+		return api.VMCreateOperation{}, err
+	}
+	createCtx, cancel := context.WithCancel(context.Background())
+	op.setCancel(cancel)
+
+	d.createOpsMu.Lock()
+	if d.createOps == nil {
+		d.createOps = map[string]*vmCreateOperationState{}
+	}
+	d.createOps[op.op.ID] = op
+	d.createOpsMu.Unlock()
+
+	go d.runVMCreateOperation(withVMCreateProgress(createCtx, op), op, params)
+	return op.snapshot(), nil
+}
+
+func (d *Daemon) runVMCreateOperation(ctx context.Context, op *vmCreateOperationState, params api.VMCreateParams) {
+	vm, err := d.CreateVM(ctx, params)
+	if err != nil {
+		op.fail(err)
+		return
+	}
+	op.done(vm)
+}
+
+func (d *Daemon) VMCreateStatus(_ context.Context, id string) (api.VMCreateOperation, error) {
+	d.createOpsMu.Lock()
+	op, ok := d.createOps[strings.TrimSpace(id)]
+	d.createOpsMu.Unlock()
+	if !ok {
+		return api.VMCreateOperation{}, fmt.Errorf("vm create operation not found: %s", id)
+	}
+	return op.snapshot(), nil
+}
+
+func (d *Daemon) CancelVMCreate(_ context.Context, id string) error {
+	d.createOpsMu.Lock()
+	op, ok := d.createOps[strings.TrimSpace(id)]
+	d.createOpsMu.Unlock()
+	if !ok {
+		return fmt.Errorf("vm create operation not found: %s", id)
+	}
+	op.cancelOperation()
+	return nil
+}
+
+func (d *Daemon) pruneVMCreateOperations(olderThan time.Time) {
+	d.createOpsMu.Lock()
+	defer d.createOpsMu.Unlock()
+	for id, op := range d.createOps {
+		snapshot := op.snapshot()
+		if !snapshot.Done {
+			continue
+		}
+		if snapshot.UpdatedAt.Before(olderThan) {
+			delete(d.createOps, id)
+		}
+	}
+}
@@ -716,7 +716,7 @@ func TestEnsureAuthorizedKeyOnWorkDiskRepairsNestedRootLayout(t *testing.T) {
 	vm := testVM("seed-repair", "image-seed-repair", "172.16.0.61")
 	vm.Runtime.WorkDiskPath = workDiskDir

-	if err := d.ensureAuthorizedKeyOnWorkDisk(context.Background(), &vm); err != nil {
+	if err := d.ensureAuthorizedKeyOnWorkDisk(context.Background(), &vm, model.Image{}, workDiskPreparation{}); err != nil {
 		t.Fatalf("ensureAuthorizedKeyOnWorkDisk: %v", err)
 	}
 	if _, err := os.Stat(filepath.Join(workDiskDir, "root")); !os.IsNotExist(err) {
@@ -748,6 +748,61 @@ func TestCreateVMRejectsNonPositiveCPUAndMemory(t *testing.T) {
 	}
 }

+func TestBeginVMCreateCompletesAndReturnsStatus(t *testing.T) {
+	t.Parallel()
+
+	ctx := context.Background()
+	db := openDaemonStore(t)
+	image := testImage("default")
+	image.ID = "default-image-id"
+	image.Name = "default"
+	if err := db.UpsertImage(ctx, image); err != nil {
+		t.Fatalf("UpsertImage: %v", err)
+	}
+
+	d := &Daemon{
+		store: db,
+		layout: paths.Layout{
+			VMsDir: t.TempDir(),
+		},
+		config: model.DaemonConfig{
+			DefaultImageName: image.Name,
+			BridgeIP:         model.DefaultBridgeIP,
+		},
+	}
+
+	op, err := d.BeginVMCreate(ctx, api.VMCreateParams{Name: "queued", NoStart: true})
+	if err != nil {
+		t.Fatalf("BeginVMCreate: %v", err)
+	}
+	if op.ID == "" {
+		t.Fatal("operation id should be populated")
+	}
+
+	deadline := time.Now().Add(2 * time.Second)
+	for time.Now().Before(deadline) {
+		status, err := d.VMCreateStatus(ctx, op.ID)
+		if err != nil {
+			t.Fatalf("VMCreateStatus: %v", err)
+		}
+		if !status.Done {
+			time.Sleep(10 * time.Millisecond)
+			continue
+		}
+		if !status.Success {
+			t.Fatalf("status = %+v, want success", status)
+		}
+		if status.VM == nil || status.VM.Name != "queued" {
+			t.Fatalf("status VM = %+v, want queued vm", status.VM)
+		}
+		if status.VM.State != model.VMStateStopped {
+			t.Fatalf("status VM state = %s, want stopped", status.VM.State)
+		}
+		return
+	}
+	t.Fatal("vm create operation did not finish before timeout")
+}
+
 func TestCreateVMUsesDefaultsWhenCPUAndMemoryOmitted(t *testing.T) {
 	ctx := context.Background()
 	db := openDaemonStore(t)
@@ -4,6 +4,8 @@ import (
 	"archive/tar"
 	"bytes"
 	"context"
+	"crypto/sha256"
+	"encoding/hex"
 	"errors"
 	"fmt"
 	"io"

@@ -137,6 +139,15 @@ func AuthorizedPublicKey(path string) ([]byte, error) {
 	return ssh.MarshalAuthorizedKey(signer.PublicKey()), nil
 }

+func AuthorizedPublicKeyFingerprint(path string) (string, error) {
+	key, err := AuthorizedPublicKey(path)
+	if err != nil {
+		return "", err
+	}
+	sum := sha256.Sum256([]byte(strings.TrimSpace(string(key))))
+	return hex.EncodeToString(sum[:]), nil
+}
+
 func shellQuote(value string) string {
 	return "'" + strings.ReplaceAll(value, "'", `'"'"'`) + "'"
 }
132	internal/guestnet/assets/bootstrap.sh	Normal file

@@ -0,0 +1,132 @@
+#!/bin/sh
+set -eu
+
+PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+
+if ! command -v ip >/dev/null 2>&1; then
+	exit 0
+fi
+
+cmdline="$(cat /proc/cmdline 2>/dev/null || true)"
+ip_arg=""
+for arg in $cmdline; do
+	case "$arg" in
+	ip=*)
+		ip_arg="${arg#ip=}"
+		break
+		;;
+	esac
+done
+
+if [ -z "$ip_arg" ]; then
+	exit 0
+fi
+
+field() {
+	printf '%s' "$ip_arg" | cut -d: -f"$1"
+}
+
+mask_to_prefix() {
+	case "$1" in
+	[0-9]|[1-2][0-9]|3[0-2])
+		printf '%s\n' "$1"
+		return 0
+		;;
+	esac
+
+	prefix=0
+	old_ifs=$IFS
+	IFS=.
+	set -- $1
+	IFS=$old_ifs
+	if [ "$#" -ne 4 ]; then
+		return 1
+	fi
+
+	for octet in "$@"; do
+		case "$octet" in
+		255) prefix=$((prefix + 8)) ;;
+		254) prefix=$((prefix + 7)) ;;
+		252) prefix=$((prefix + 6)) ;;
+		248) prefix=$((prefix + 5)) ;;
+		240) prefix=$((prefix + 4)) ;;
+		224) prefix=$((prefix + 3)) ;;
+		192) prefix=$((prefix + 2)) ;;
+		128) prefix=$((prefix + 1)) ;;
+		0) ;;
+		*) return 1 ;;
+		esac
+	done
+
+	printf '%s\n' "$prefix"
+}
+
+find_iface() {
+	hint="$1"
+	if [ -n "$hint" ] && [ -d "/sys/class/net/$hint" ]; then
+		printf '%s\n' "$hint"
+		return 0
+	fi
+
+	for path in /sys/class/net/*; do
+		[ -e "$path" ] || continue
+		iface="${path##*/}"
+		if [ "$iface" = "lo" ]; then
+			continue
+		fi
+		printf '%s\n' "$iface"
+		return 0
+	done
+
+	return 1
+}
+
+guest_ip="$(field 1)"
+gateway_ip="$(field 3)"
+netmask="$(field 4)"
+iface_hint="$(field 6)"
+dns1="$(field 8)"
+dns2="$(field 9)"
+
+if [ -z "$guest_ip" ]; then
+	exit 0
+fi
+
+iface=""
+attempt=0
+while [ "$attempt" -lt 50 ]; do
+	iface="$(find_iface "$iface_hint" || true)"
+	if [ -n "$iface" ]; then
+		break
+	fi
+	attempt=$((attempt + 1))
+	sleep 0.2
+done
+
+if [ -z "$iface" ]; then
+	exit 0
+fi
+
+prefix="$(mask_to_prefix "$netmask" || printf '24\n')"
+
+ip link set "$iface" up
+ip addr replace "$guest_ip/$prefix" dev "$iface"
+
+if [ -n "$gateway_ip" ]; then
+	ip route replace default via "$gateway_ip" dev "$iface"
+fi
+
+if [ -n "$dns1" ] || [ -n "$dns2" ]; then
+	tmp_resolv="/tmp/.banger-resolv.conf.$$"
+	: > "$tmp_resolv"
+	if [ -n "$dns1" ]; then
+		printf 'nameserver %s\n' "$dns1" >> "$tmp_resolv"
+	fi
+	if [ -n "$dns2" ]; then
+		printf 'nameserver %s\n' "$dns2" >> "$tmp_resolv"
+	fi
+	if [ -s "$tmp_resolv" ]; then
+		cat "$tmp_resolv" > /etc/resolv.conf
+	fi
+	rm -f "$tmp_resolv"
+fi
13	internal/guestnet/assets/systemd.service	Normal file

@@ -0,0 +1,13 @@
+[Unit]
+Description=Banger guest network bootstrap
+After=local-fs.target
+Before=network.target network-online.target
+ConditionPathExists=/proc/cmdline
+
+[Service]
+Type=oneshot
+ExecStart=/usr/local/libexec/banger-network-bootstrap
+RemainAfterExit=yes
+
+[Install]
+WantedBy=multi-user.target
4	internal/guestnet/assets/void-core-service.sh	Normal file

@@ -0,0 +1,4 @@
+#!/bin/sh
+if [ -x /usr/local/libexec/banger-network-bootstrap ]; then
+	/usr/local/libexec/banger-network-bootstrap
+fi
30	internal/guestnet/guestnet.go	Normal file

@@ -0,0 +1,30 @@
+package guestnet
+
+import _ "embed"
+
+const (
+	GuestScriptPath     = "/usr/local/libexec/banger-network-bootstrap"
+	SystemdServiceName  = "banger-network.service"
+	VoidCoreServicePath = "/etc/runit/core-services/20-banger-network.sh"
+)
+
+var (
+	//go:embed assets/bootstrap.sh
+	bootstrapScript string
+	//go:embed assets/systemd.service
+	systemdService string
+	//go:embed assets/void-core-service.sh
+	voidCoreService string
+)
+
+func BootstrapScript() string {
+	return bootstrapScript
+}
+
+func SystemdServiceUnit() string {
+	return systemdService
+}
+
+func VoidCoreService() string {
+	return voidCoreService
+}
@@ -61,20 +61,21 @@ type DaemonConfig struct {
 }

 type Image struct {
 	ID           string `json:"id"`
 	Name         string `json:"name"`
 	Managed      bool   `json:"managed"`
 	ArtifactDir  string `json:"artifact_dir,omitempty"`
 	RootfsPath   string `json:"rootfs_path"`
 	WorkSeedPath string `json:"work_seed_path,omitempty"`
 	KernelPath   string `json:"kernel_path"`
 	InitrdPath   string `json:"initrd_path,omitempty"`
 	ModulesDir   string `json:"modules_dir,omitempty"`
 	PackagesPath string `json:"packages_path,omitempty"`
 	BuildSize    string `json:"build_size,omitempty"`
-	Docker       bool      `json:"docker"`
-	CreatedAt    time.Time `json:"created_at"`
-	UpdatedAt    time.Time `json:"updated_at"`
+	SeededSSHPublicKeyFingerprint string    `json:"seeded_ssh_public_key_fingerprint,omitempty"`
+	Docker                        bool      `json:"docker"`
+	CreatedAt                     time.Time `json:"created_at"`
+	UpdatedAt                     time.Time `json:"updated_at"`
 }

 type VMSpec struct {
104	internal/opencode/opencode.go	Normal file

@@ -0,0 +1,104 @@
+package opencode
+
+import (
+	"context"
+	"fmt"
+	"log/slog"
+	"strings"
+	"time"
+
+	"banger/internal/vsockagent"
+)
+
+const (
+	Port             = 4096
+	Host             = "0.0.0.0"
+	GuestBinaryPath  = "/usr/local/bin/opencode"
+	ShimPath         = "/root/.local/share/mise/shims/opencode"
+	ServiceName      = "banger-opencode.service"
+	RunitServiceName = "banger-opencode"
+	ReadyTimeout     = 15 * time.Second
+	pollInterval     = 200 * time.Millisecond
+)
+
+func ServiceUnit() string {
+	return fmt.Sprintf(`[Unit]
+Description=Banger opencode server
+After=network.target
+RequiresMountsFor=/root
+
+[Service]
+Type=simple
+Environment=HOME=/root
+WorkingDirectory=/root
+ExecStart=%s serve --hostname %s --port %d
+Restart=on-failure
+RestartSec=1
+
+[Install]
+WantedBy=multi-user.target
+`, GuestBinaryPath, Host, Port)
+}
+
+func RunitRunScript() string {
+	return fmt.Sprintf(`#!/bin/sh
+set -e
+export HOME=/root
+cd /root
+exec %s serve --hostname %s --port %d
+`, GuestBinaryPath, Host, Port)
+}
+
+func Ready(listeners []vsockagent.PortListener) bool {
+	for _, listener := range listeners {
+		if strings.ToLower(strings.TrimSpace(listener.Proto)) != "tcp" {
+			continue
+		}
+		if listener.Port == Port {
+			return true
+		}
+	}
+	return false
+}
+
+func WaitReady(ctx context.Context, logger *slog.Logger, socketPath string, report func(stage, detail string)) error {
+	return waitReady(ctx, logger, socketPath, ReadyTimeout, report)
+}
+
+func waitReady(ctx context.Context, logger *slog.Logger, socketPath string, timeout time.Duration, report func(stage, detail string)) error {
+	waitCtx, cancel := context.WithTimeout(ctx, timeout)
+	defer cancel()
+
+	ticker := time.NewTicker(pollInterval)
+	defer ticker.Stop()
+
+	var lastErr error
+	for {
+		portsCtx, portsCancel := context.WithTimeout(waitCtx, 3*time.Second)
+		listeners, err := vsockagent.Ports(portsCtx, logger, socketPath)
+		portsCancel()
+		if err == nil {
+			if Ready(listeners) {
+				return nil
+			}
+			if report != nil {
+				report("wait_opencode", fmt.Sprintf("waiting for opencode on guest port %d", Port))
+			}
+			lastErr = fmt.Errorf("guest port %d is not listening yet", Port)
+		} else {
+			if report != nil {
+				report("wait_vsock_agent", "waiting for guest vsock agent")
+			}
+			lastErr = err
+		}
+
+		select {
+		case <-waitCtx.Done():
+			if lastErr != nil {
+				return fmt.Errorf("opencode server did not become ready on guest port %d: %w", Port, lastErr)
+			}
+			return fmt.Errorf("opencode server did not become ready on guest port %d before timeout", Port)
+		case <-ticker.C:
+		}
+	}
+}
116	internal/opencode/opencode_test.go	Normal file

@@ -0,0 +1,116 @@
+package opencode
+
+import (
+	"context"
+	"fmt"
+	"net"
+	"os"
+	"path/filepath"
+	"strings"
+	"testing"
+	"time"
+
+	"banger/internal/vsockagent"
+)
+
+func TestServiceUnitContainsExpectedExecStart(t *testing.T) {
+	unit := ServiceUnit()
+	for _, snippet := range []string{
+		"RequiresMountsFor=/root",
+		"WorkingDirectory=/root",
+		"Environment=HOME=/root",
+		"ExecStart=/usr/local/bin/opencode serve --hostname 0.0.0.0 --port 4096",
+		"WantedBy=multi-user.target",
+	} {
+		if !strings.Contains(unit, snippet) {
+			t.Fatalf("service unit missing snippet %q\nunit:\n%s", snippet, unit)
+		}
+	}
+}
+
+func TestRunitRunScriptContainsExpectedExec(t *testing.T) {
+	script := RunitRunScript()
+	for _, snippet := range []string{
+		"export HOME=/root",
+		"cd /root",
+		"exec /usr/local/bin/opencode serve --hostname 0.0.0.0 --port 4096",
+	} {
+		if !strings.Contains(script, snippet) {
+			t.Fatalf("runit script missing snippet %q\nscript:\n%s", snippet, script)
+		}
+	}
+}
+
+func TestReadyMatchesTCPPort(t *testing.T) {
+	if Ready([]vsockagent.PortListener{{Proto: "udp", Port: Port}}) {
+		t.Fatal("udp listener should not satisfy readiness")
+	}
+	if Ready([]vsockagent.PortListener{{Proto: "tcp", Port: 8080}}) {
+		t.Fatal("wrong tcp port should not satisfy readiness")
+	}
+	if !Ready([]vsockagent.PortListener{{Proto: "tcp", Port: Port}}) {
+		t.Fatal("tcp listener on opencode port should satisfy readiness")
+	}
+}
+
+func TestWaitReadyReturnsWhenPortIsListening(t *testing.T) {
+	socketPath := filepath.Join(t.TempDir(), "opencode.vsock")
+	listener, err := net.Listen("unix", socketPath)
+	if err != nil {
+		t.Fatalf("listen: %v", err)
+	}
+	t.Cleanup(func() {
+		_ = listener.Close()
+		_ = os.Remove(socketPath)
+	})
+
+	serverDone := make(chan error, 1)
+	go func() {
+		conn, err := listener.Accept()
+		if err != nil {
+			serverDone <- err
+			return
+		}
+		defer conn.Close()
+		buf := make([]byte, 512)
+		n, err := conn.Read(buf)
+		if err != nil {
+			serverDone <- err
+			return
+		}
+		if got := string(buf[:n]); got != "CONNECT 42070\n" {
+			serverDone <- fmt.Errorf("unexpected connect message %q", got)
+			return
+		}
+		if _, err := conn.Write([]byte("OK 1\n")); err != nil {
+			serverDone <- err
+			return
+		}
+		reqBuf := make([]byte, 0, 512)
+		for {
+			n, err = conn.Read(buf)
+			if err != nil {
+				serverDone <- err
+				return
+			}
+			reqBuf = append(reqBuf, buf[:n]...)
+			if strings.Contains(string(reqBuf), "\r\n\r\n") {
+				break
+			}
+		}
+		if !strings.Contains(string(reqBuf), "GET /ports HTTP/1.1\r\n") {
+			serverDone <- fmt.Errorf("unexpected ports payload %q", string(reqBuf))
+			return
+		}
+		body := []byte(`{"listeners":[{"proto":"tcp","bind_address":"0.0.0.0","port":4096}]}`)
+		_, err = conn.Write([]byte(fmt.Sprintf("HTTP/1.1 200 OK\r\nContent-Type: application/json\r\nContent-Length: %d\r\n\r\n%s", len(body), body)))
+		serverDone <- err
+	}()
+
+	if err := waitReady(context.Background(), nil, socketPath, time.Second, nil); err != nil {
+		t.Fatalf("waitReady: %v", err)
+	}
+	if err := <-serverDone; err != nil {
+		t.Fatalf("server: %v", err)
+	}
+}
@ -80,6 +80,7 @@ func (s *Store) migrate() error {
|
||||||
modules_dir TEXT,
|
modules_dir TEXT,
|
||||||
packages_path TEXT,
|
packages_path TEXT,
|
||||||
build_size TEXT,
|
build_size TEXT,
|
||||||
|
seeded_ssh_public_key_fingerprint TEXT,
|
||||||
docker INTEGER NOT NULL DEFAULT 0,
|
docker INTEGER NOT NULL DEFAULT 0,
|
||||||
created_at TEXT NOT NULL,
|
created_at TEXT NOT NULL,
|
||||||
updated_at TEXT NOT NULL
|
updated_at TEXT NOT NULL
|
||||||
|
|
@ -107,6 +108,9 @@ func (s *Store) migrate() error {
|
||||||
if err := ensureColumnExists(s.db, "images", "work_seed_path", "TEXT"); err != nil {
|
if err := ensureColumnExists(s.db, "images", "work_seed_path", "TEXT"); err != nil {
|
||||||
return err
|
return err
|
||||||
}
|
}
|
||||||
|
if err := ensureColumnExists(s.db, "images", "seeded_ssh_public_key_fingerprint", "TEXT"); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
@ -116,8 +120,8 @@ func (s *Store) UpsertImage(ctx context.Context, image model.Image) error {
|
||||||
const query = `
|
const query = `
|
||||||
INSERT INTO images (
|
INSERT INTO images (
|
||||||
id, name, managed, artifact_dir, rootfs_path, work_seed_path, kernel_path, initrd_path,
|
id, name, managed, artifact_dir, rootfs_path, work_seed_path, kernel_path, initrd_path,
|
||||||
modules_dir, packages_path, build_size, docker, created_at, updated_at
|
modules_dir, packages_path, build_size, seeded_ssh_public_key_fingerprint, docker, created_at, updated_at
|
||||||
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
|
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
|
||||||
ON CONFLICT(id) DO UPDATE SET
|
ON CONFLICT(id) DO UPDATE SET
|
||||||
name=excluded.name,
|
name=excluded.name,
|
||||||
managed=excluded.managed,
|
managed=excluded.managed,
|
||||||
|
|
@ -129,6 +133,7 @@ func (s *Store) UpsertImage(ctx context.Context, image model.Image) error {
|
||||||
modules_dir=excluded.modules_dir,
|
modules_dir=excluded.modules_dir,
|
||||||
packages_path=excluded.packages_path,
|
packages_path=excluded.packages_path,
|
||||||
build_size=excluded.build_size,
|
build_size=excluded.build_size,
|
||||||
|
seeded_ssh_public_key_fingerprint=excluded.seeded_ssh_public_key_fingerprint,
|
||||||
docker=excluded.docker,
|
docker=excluded.docker,
|
||||||
updated_at=excluded.updated_at`
|
updated_at=excluded.updated_at`
|
||||||
_, err := s.db.ExecContext(ctx, query,
|
_, err := s.db.ExecContext(ctx, query,
|
||||||
|
|
@ -143,6 +148,7 @@ func (s *Store) UpsertImage(ctx context.Context, image model.Image) error {
|
||||||
image.ModulesDir,
|
image.ModulesDir,
|
||||||
image.PackagesPath,
|
image.PackagesPath,
|
||||||
image.BuildSize,
|
image.BuildSize,
|
||||||
|
image.SeededSSHPublicKeyFingerprint,
|
||||||
boolToInt(image.Docker),
|
boolToInt(image.Docker),
|
||||||
image.CreatedAt.Format(time.RFC3339),
|
image.CreatedAt.Format(time.RFC3339),
|
||||||
image.UpdatedAt.Format(time.RFC3339),
|
image.UpdatedAt.Format(time.RFC3339),
|
||||||
|
|
@ -151,15 +157,15 @@ func (s *Store) UpsertImage(ctx context.Context, image model.Image) error {
|
||||||
}
|
}
|
||||||
|
|
||||||
func (s *Store) GetImageByName(ctx context.Context, name string) (model.Image, error) {
|
func (s *Store) GetImageByName(ctx context.Context, name string) (model.Image, error) {
|
||||||
-    return s.getImage(ctx, "SELECT id, name, managed, artifact_dir, rootfs_path, work_seed_path, kernel_path, initrd_path, modules_dir, packages_path, build_size, docker, created_at, updated_at FROM images WHERE name = ?", name)
+    return s.getImage(ctx, "SELECT id, name, managed, artifact_dir, rootfs_path, work_seed_path, kernel_path, initrd_path, modules_dir, packages_path, build_size, seeded_ssh_public_key_fingerprint, docker, created_at, updated_at FROM images WHERE name = ?", name)
 }
 
 func (s *Store) GetImageByID(ctx context.Context, id string) (model.Image, error) {
-    return s.getImage(ctx, "SELECT id, name, managed, artifact_dir, rootfs_path, work_seed_path, kernel_path, initrd_path, modules_dir, packages_path, build_size, docker, created_at, updated_at FROM images WHERE id = ?", id)
+    return s.getImage(ctx, "SELECT id, name, managed, artifact_dir, rootfs_path, work_seed_path, kernel_path, initrd_path, modules_dir, packages_path, build_size, seeded_ssh_public_key_fingerprint, docker, created_at, updated_at FROM images WHERE id = ?", id)
 }
 
 func (s *Store) ListImages(ctx context.Context) ([]model.Image, error) {
-    rows, err := s.db.QueryContext(ctx, "SELECT id, name, managed, artifact_dir, rootfs_path, work_seed_path, kernel_path, initrd_path, modules_dir, packages_path, build_size, docker, created_at, updated_at FROM images ORDER BY created_at ASC")
+    rows, err := s.db.QueryContext(ctx, "SELECT id, name, managed, artifact_dir, rootfs_path, work_seed_path, kernel_path, initrd_path, modules_dir, packages_path, build_size, seeded_ssh_public_key_fingerprint, docker, created_at, updated_at FROM images ORDER BY created_at ASC")
     if err != nil {
         return nil, err
     }
@@ -337,6 +343,7 @@ func scanImageRow(row scanner) (model.Image, error) {
     var image model.Image
     var managed, docker int
     var workSeedPath sql.NullString
+    var seededSSHPublicKeyFingerprint sql.NullString
     var createdAt, updatedAt string
     err := row.Scan(
         &image.ID,
@@ -350,6 +357,7 @@ func scanImageRow(row scanner) (model.Image, error) {
         &image.ModulesDir,
         &image.PackagesPath,
         &image.BuildSize,
+        &seededSSHPublicKeyFingerprint,
         &docker,
         &createdAt,
         &updatedAt,
@@ -360,6 +368,7 @@ func scanImageRow(row scanner) (model.Image, error) {
     image.Managed = managed == 1
     image.Docker = docker == 1
     image.WorkSeedPath = workSeedPath.String
+    image.SeededSSHPublicKeyFingerprint = seededSSHPublicKeyFingerprint.String
     image.CreatedAt, err = time.Parse(time.RFC3339, createdAt)
     if err != nil {
         return image, err
@@ -335,20 +335,21 @@ func openTestStore(t *testing.T) *Store {
 func sampleImage(name string) model.Image {
     now := fixedTime()
     return model.Image{
         ID:           name + "-id",
         Name:         name,
         Managed:      true,
         ArtifactDir:  "/artifacts/" + name,
         RootfsPath:   "/images/" + name + ".ext4",
         WorkSeedPath: "/images/" + name + ".work-seed.ext4",
         KernelPath:   "/kernels/" + name,
         InitrdPath:   "/initrd/" + name,
         ModulesDir:   "/modules/" + name,
         PackagesPath: "/packages/" + name + ".apt",
         BuildSize:    "8G",
-        Docker:       true,
-        CreatedAt:    now,
-        UpdatedAt:    now,
+        SeededSSHPublicKeyFingerprint: "seeded-fingerprint",
+        Docker:                        true,
+        CreatedAt:                     now,
+        UpdatedAt:                     now,
     }
 }
@@ -397,9 +397,10 @@ func UpdateFSTab(existing string) string {
 
 func BuildBootArgs(vmName, guestIP, bridgeIP, dns string) string {
     return fmt.Sprintf(
-        "console=ttyS0 reboot=k panic=1 pci=off root=/dev/vda rw ip=%s::%s:255.255.255.0::eth0:off:%s hostname=%s systemd.mask=home.mount systemd.mask=var.mount",
+        "console=ttyS0 reboot=k panic=1 pci=off root=/dev/vda rw ip=%s::%s:255.255.255.0:%s:eth0:off:%s hostname=%s systemd.mask=home.mount systemd.mask=var.mount",
         guestIP,
         bridgeIP,
+        vmName,
         dns,
         vmName,
     )

@@ -167,6 +167,16 @@ func TestReadNormalizedLines(t *testing.T) {
     }
 }
 
+func TestBuildBootArgsIncludesHostnameInIPField(t *testing.T) {
+    t.Parallel()
+
+    got := BuildBootArgs("devbox", "172.16.0.2", "172.16.0.1", "1.1.1.1")
+    want := "console=ttyS0 reboot=k panic=1 pci=off root=/dev/vda rw ip=172.16.0.2::172.16.0.1:255.255.255.0:devbox:eth0:off:1.1.1.1 hostname=devbox systemd.mask=home.mount systemd.mask=var.mount"
+    if got != want {
+        t.Fatalf("BuildBootArgs() = %q, want %q", got, want)
+    }
+}
+
 func TestWriteExt4FileRemovesTempFileAndReturnsCopyError(t *testing.T) {
     t.Parallel()
@@ -18,8 +18,10 @@ Defaults:
   --arch x86_64
   --packages ./packages.void
 
-This path is experimental and local-only. It reuses the current runtime bundle
-kernel/initrd/modules and does not change the default Debian image flow.
+This path is experimental and local-only. If ./runtime/void-kernel exists it
+uses the staged Void kernel modules from that directory; otherwise it falls back
+to the current runtime bundle modules. It does not change the default Debian
+image flow.
 EOF
 }
 
@@ -85,6 +87,14 @@ bundle_path() {
     printf '%s\n' "$fallback"
 }
 
+find_latest_module_dir() {
+    local root="$1"
+    if [[ ! -d "$root" ]]; then
+        return 1
+    fi
+    find "$root" -mindepth 1 -maxdepth 1 -type d | sort | tail -n 1
+}
+
 find_static_binary() {
     local name="$1"
     find "$STATIC_DIR" -type f \( -name "$name" -o -name "$name.static" \) -perm -u+x | sort | head -n 1
@@ -94,6 +104,15 @@ find_static_keys_dir() {
     find "$STATIC_DIR" -type d -path '*/var/db/xbps/keys' | sort | head -n 1
 }
 
+install_root_authorized_key() {
+    local public_key
+    public_key="$(ssh-keygen -y -f "$SSH_KEY")"
+    sudo mkdir -p "$ROOT_MOUNT/root/.ssh"
+    printf '%s\n' "$public_key" | sudo tee "$ROOT_MOUNT/root/.ssh/authorized_keys" >/dev/null
+    sudo chmod 700 "$ROOT_MOUNT/root/.ssh"
+    sudo chmod 600 "$ROOT_MOUNT/root/.ssh/authorized_keys"
+}
+
 ensure_sshd_include() {
     local cfg="$ROOT_MOUNT/etc/ssh/sshd_config"
     local tmp_cfg="$TMP_DIR/sshd_config"
@@ -137,6 +156,34 @@ EOF
     sudo ln -snf /etc/sv/banger-vsock-agent "$ROOT_MOUNT/etc/runit/runsvdir/default/banger-vsock-agent"
 }
 
+install_opencode_service() {
+    local service_dir="$ROOT_MOUNT/etc/sv/banger-opencode"
+    local run_path="$service_dir/run"
+    local finish_path="$service_dir/finish"
+
+    sudo mkdir -p "$service_dir"
+    cat <<'EOF' | sudo tee "$run_path" >/dev/null
+#!/bin/sh
+set -e
+export HOME=/root
+cd /root
+exec /usr/local/bin/opencode serve --hostname 0.0.0.0 --port 4096
+EOF
+    cat <<'EOF' | sudo tee "$finish_path" >/dev/null
+#!/bin/sh
+exit 0
+EOF
+    sudo chmod 0755 "$run_path" "$finish_path"
+    sudo mkdir -p "$ROOT_MOUNT/etc/runit/runsvdir/default"
+    sudo ln -snf /etc/sv/banger-opencode "$ROOT_MOUNT/etc/runit/runsvdir/default/banger-opencode"
+}
+
+install_guest_network_bootstrap() {
+    sudo mkdir -p "$ROOT_MOUNT/usr/local/libexec" "$ROOT_MOUNT/etc/runit/core-services"
+    sudo install -m 0755 "$GUESTNET_BOOTSTRAP_SCRIPT" "$ROOT_MOUNT/usr/local/libexec/banger-network-bootstrap"
+    sudo install -m 0644 "$GUESTNET_VOID_CORE_SERVICE" "$ROOT_MOUNT/etc/runit/core-services/20-banger-network.sh"
+}
+
 configure_docker_bootstrap() {
     local modules_conf="$ROOT_MOUNT/etc/modules-load.d/docker-netfilter.conf"
     local sysctl_conf="$ROOT_MOUNT/etc/sysctl.d/99-docker.conf"
@@ -346,6 +393,7 @@ if [[ ! -d "$RUNTIME_DIR" ]]; then
 fi
 
 BUNDLE_METADATA="$RUNTIME_DIR/bundle.json"
+SSH_KEY="$(bundle_path ssh_key_path "$RUNTIME_DIR/id_ed25519")"
 OUT_ROOTFS="$RUNTIME_DIR/rootfs-void.ext4"
 SIZE_SPEC="2G"
 MIRROR="https://repo-default.voidlinux.org"
@@ -353,11 +401,17 @@ ARCH="x86_64"
 MISE_VERSION="v2025.12.0"
 MISE_INSTALL_PATH="/usr/local/bin/mise"
 OPENCODE_TOOL="github:anomalyco/opencode"
+GUESTNET_BOOTSTRAP_SCRIPT="$SCRIPT_DIR/internal/guestnet/assets/bootstrap.sh"
+GUESTNET_VOID_CORE_SERVICE="$SCRIPT_DIR/internal/guestnet/assets/void-core-service.sh"
 MODULES_DIR="$(bundle_path default_modules_dir "$RUNTIME_DIR/wtf/root/lib/modules/6.8.0-94-generic")"
+VOID_KERNEL_MODULES_DIR="$(find_latest_module_dir "$RUNTIME_DIR/void-kernel/lib/modules" || true)"
 VSOCK_AGENT="$(bundle_path vsock_agent_path "$RUNTIME_DIR/banger-vsock-agent")"
 if [[ "$VSOCK_AGENT" == "$RUNTIME_DIR/banger-vsock-agent" && ! -x "$VSOCK_AGENT" ]]; then
     VSOCK_AGENT="$(bundle_path vsock_ping_helper_path "$RUNTIME_DIR/banger-vsock-pingd")"
 fi
+if [[ -n "$VOID_KERNEL_MODULES_DIR" ]]; then
+    MODULES_DIR="$VOID_KERNEL_MODULES_DIR"
+fi
 
 while [[ $# -gt 0 ]]; do
     case "$1" in
@@ -417,6 +471,14 @@ if [[ ! -x "$VSOCK_AGENT" ]]; then
     log "run 'make build' or refresh the runtime bundle"
     exit 1
 fi
+if [[ ! -f "$GUESTNET_BOOTSTRAP_SCRIPT" ]]; then
+    log "guest network bootstrap script not found: $GUESTNET_BOOTSTRAP_SCRIPT"
+    exit 1
+fi
+if [[ ! -f "$GUESTNET_VOID_CORE_SERVICE" ]]; then
+    log "guest network core-service shim not found: $GUESTNET_VOID_CORE_SERVICE"
+    exit 1
+fi
 if [[ -e "$OUT_ROOTFS" ]]; then
     log "output rootfs already exists: $OUT_ROOTFS"
     exit 1
@@ -426,6 +488,7 @@ require_command curl
 require_command tar
 require_command sudo
 require_command mkfs.ext4
+require_command ssh-keygen
 require_command mount
 require_command umount
 require_command install
@@ -498,7 +561,11 @@ if [[ -n "$XBPS_QUERY" && -x "$XBPS_QUERY" ]]; then
     sudo env XBPS_ARCH="$ARCH" "$XBPS_QUERY" -r "$ROOT_MOUNT" -l | awk '/^ii/ {print " " $2}' || true
 fi
 
-log "copying bundled kernel modules into the guest"
+if [[ -n "$VOID_KERNEL_MODULES_DIR" ]]; then
+    log "copying staged Void kernel modules into the guest"
+else
+    log "copying bundled kernel modules into the guest"
+fi
 sudo mkdir -p "$ROOT_MOUNT/lib/modules"
 sudo cp -a "$MODULES_DIR" "$ROOT_MOUNT/lib/modules/"
@@ -507,6 +574,7 @@ sudo mkdir -p "$ROOT_MOUNT/usr/local/bin"
 sudo install -m 0755 "$VSOCK_AGENT" "$ROOT_MOUNT/usr/local/bin/banger-vsock-agent"
 
 log "preparing SSH and runit services"
+install_guest_network_bootstrap
 ensure_sshd_include
 enable_sshd_service
 install_vsock_service
@@ -516,7 +584,8 @@ normalize_root_shell
 configure_root_bash_prompt
 log "installing mise and opencode"
 install_mise_and_opencode
-sudo mkdir -p "$ROOT_MOUNT/root/.ssh"
+install_opencode_service
+install_root_authorized_key
 sudo touch "$ROOT_MOUNT/etc/fstab" "$ROOT_MOUNT/etc/hostname"
 sudo chroot "$ROOT_MOUNT" /usr/bin/ssh-keygen -A
391  make-void-kernel.sh  Executable file

@@ -0,0 +1,391 @@
+#!/usr/bin/env bash
+set -euo pipefail
+
+log() {
+    printf '[make-void-kernel] %s\n' "$*"
+}
+
+usage() {
+    cat <<'EOF'
+Usage: ./make-void-kernel.sh [--out-dir <path>] [--mirror <url>] [--arch <arch>] [--kernel-package <name>] [--print-register-flags]
+
+Download and stage a Void Linux kernel under ./runtime/void-kernel for the
+experimental Void guest flow.
+
+Defaults:
+  --out-dir ./runtime/void-kernel
+  --mirror https://repo-default.voidlinux.org
+  --arch x86_64
+  --kernel-package linux6.12
+
+The staged output contains:
+  boot/vmlinux-<version>        Firecracker-usable kernel extracted from vmlinuz
+  boot/vmlinuz-<version>        Raw distro boot image from the Void package
+  boot/initramfs-<version>.img  Matching initramfs generated with dracut
+  boot/config-<version>         Void kernel config
+  lib/modules/<version>/        Matching kernel modules tree
+
+If --print-register-flags is passed, the script does not download anything. It
+prints the banger image register flags for an existing staged Void kernel.
+EOF
+}
+
+require_command() {
+    local name="$1"
+    command -v "$name" >/dev/null 2>&1 || {
+        log "required command not found: $name"
+        exit 1
+    }
+}
+
+normalize_mirror() {
+    local mirror="${1%/}"
+    mirror="${mirror%/current}"
+    mirror="${mirror%/static}"
+    printf '%s\n' "$mirror"
+}
+
+find_static_binary() {
+    local name="$1"
+    find "$STATIC_DIR" -type f \( -name "$name" -o -name "$name.static" \) -perm -u+x | sort | head -n 1
+}
+
+find_static_keys_dir() {
+    find "$STATIC_DIR" -type d -path '*/var/db/xbps/keys' | sort | head -n 1
+}
+
+find_latest_matching() {
+    local dir="$1"
+    local pattern="$2"
+    if [[ ! -d "$dir" ]]; then
+        return 1
+    fi
+    find "$dir" -maxdepth 1 -type f -name "$pattern" | sort | tail -n 1
+}
+
+find_latest_module_dir() {
+    local root="$1"
+    if [[ ! -d "$root" ]]; then
+        return 1
+    fi
+    find "$root" -mindepth 1 -maxdepth 1 -type d | sort | tail -n 1
+}
+
+print_register_flags() {
+    local kernel=""
+    local initrd=""
+    local modules=""
+
+    kernel="$(find_latest_matching "$OUT_DIR/boot" 'vmlinux-*' || true)"
+    initrd="$(find_latest_matching "$OUT_DIR/boot" 'initramfs-*' || true)"
+    modules="$(find_latest_module_dir "$OUT_DIR/lib/modules" || true)"
+
+    if [[ -z "$kernel" || -z "$modules" ]]; then
+        log "staged Void kernel not found under $OUT_DIR"
+        exit 1
+    fi
+
+    printf -- '--kernel %q ' "$kernel"
+    if [[ -n "$initrd" ]]; then
+        printf -- '--initrd %q ' "$initrd"
+    fi
+    printf -- '--modules %q\n' "$modules"
+}
+
+check_elf() {
+    local path="$1"
+    readelf -h "$path" >/dev/null 2>&1
+}
+
+ensure_stage_root_layout() {
+    mkdir -p "$STAGE_ROOT/usr"
+
+    if [[ ! -e "$STAGE_ROOT/bin" ]]; then
+        ln -snf usr/bin "$STAGE_ROOT/bin"
+    fi
+    if [[ ! -e "$STAGE_ROOT/sbin" ]]; then
+        ln -snf usr/bin "$STAGE_ROOT/sbin"
+    fi
+    if [[ ! -e "$STAGE_ROOT/usr/sbin" ]]; then
+        ln -snf bin "$STAGE_ROOT/usr/sbin"
+    fi
+    if [[ ! -e "$STAGE_ROOT/lib" ]]; then
+        ln -snf usr/lib "$STAGE_ROOT/lib"
+    fi
+    if [[ ! -e "$STAGE_ROOT/lib64" ]]; then
+        ln -snf usr/lib "$STAGE_ROOT/lib64"
+    fi
+    if [[ ! -e "$STAGE_ROOT/usr/lib64" ]]; then
+        ln -snf lib "$STAGE_ROOT/usr/lib64"
+    fi
+    if [[ -x "$STAGE_ROOT/usr/bin/udevd" ]]; then
+        mkdir -p "$STAGE_ROOT/usr/lib/udev" "$STAGE_ROOT/usr/lib/systemd"
+        if [[ ! -e "$STAGE_ROOT/usr/lib/udev/udevd" ]]; then
+            ln -snf ../../bin/udevd "$STAGE_ROOT/usr/lib/udev/udevd"
+        fi
+        if [[ ! -e "$STAGE_ROOT/usr/lib/systemd/systemd-udevd" ]]; then
+            ln -snf ../../bin/udevd "$STAGE_ROOT/usr/lib/systemd/systemd-udevd"
+        fi
+    fi
+}
+
+sync_host_dracut_tree() {
+    if [[ ! -d /usr/lib/dracut ]]; then
+        log "host dracut support files not found under /usr/lib/dracut"
+        exit 1
+    fi
+    rm -rf "$STAGE_ROOT/usr/lib/dracut"
+    mkdir -p "$STAGE_ROOT/usr/lib"
+    cp -a /usr/lib/dracut "$STAGE_ROOT/usr/lib/dracut"
+}
+
+build_initramfs() {
+    local kver="$1"
+    local modules_dir="$2"
+    local out="$3"
+    local config_dir="$TMP_DIR/dracut.conf.d"
+    local tmpdir="$TMP_DIR/dracut-tmp"
+    local force_drivers="virtio virtio_ring virtio_mmio virtio_blk virtio_net virtio_console ext4 vsock vmw_vsock_virtio_transport"
+
+    mkdir -p "$config_dir" "$tmpdir"
+    ensure_stage_root_layout
+    sync_host_dracut_tree
+
+    log "generating initramfs for kernel $kver with host dracut against the staged Void sysroot"
+    env dracutbasedir="/usr/lib/dracut" dracut \
+        --force \
+        --kver "$kver" \
+        --sysroot "$STAGE_ROOT" \
+        --kmoddir "$modules_dir" \
+        --conf /dev/null \
+        --confdir "$config_dir" \
+        --tmpdir "$tmpdir" \
+        --no-hostonly \
+        --filesystems "ext4" \
+        --force-drivers "$force_drivers" \
+        --gzip \
+        "$out"
+}
+
+extract_vmlinux() {
+    local image="$1"
+    local out="$2"
+    local tmp="$TMP_DIR/vmlinux.extract"
+
+    if check_elf "$image"; then
+        install -m 0644 "$image" "$out"
+        return 0
+    fi
+
+    try_decompress() {
+        local header="$1"
+        local marker="$2"
+        local command="$3"
+        local pos=""
+
+        while IFS= read -r pos; do
+            [[ -n "$pos" ]] || continue
+            pos="${pos%%:*}"
+            tail -c+"$pos" "$image" | eval "$command" >"$tmp" 2>/dev/null || true
+            if check_elf "$tmp"; then
+                install -m 0644 "$tmp" "$out"
+                return 0
+            fi
+        done < <(tr "$header\n$marker" "\n$marker=" < "$image" | grep -abo "^$marker" || true)
+
+        return 1
+    }
+
+    try_decompress '\037\213\010' "xy" "gunzip" && return 0
+    try_decompress '\3757zXZ\000' "abcde" "unxz" && return 0
+    try_decompress "BZh" "xy" "bunzip2" && return 0
+    try_decompress '\135\000\000\000' "xxx" "unlzma" && return 0
+    try_decompress '\002!L\030' "xxx" "lz4 -d" && return 0
+    try_decompress '(\265/\375' "xxx" "unzstd" && return 0
+
+    return 1
+}
+
+resolve_kernel_package_file() {
+    local escaped_name=""
+    escaped_name="$(printf '%s\n' "$KERNEL_PACKAGE" | sed 's/[.[\*^$()+?{|]/\\&/g')"
+
+    curl -fsSL "$REPO_URL/" |
+        grep -o "${escaped_name}-[0-9][^\" >]*\\.${ARCH}\\.xbps" |
+        sort -u |
+        tail -n 1
+}
+
+cleanup() {
+    if [[ -n "${TMP_DIR:-}" && -d "${TMP_DIR:-}" ]]; then
+        rm -rf "$TMP_DIR"
+    fi
+}
+
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+DEFAULT_RUNTIME_DIR="$SCRIPT_DIR"
+if [[ -d "$SCRIPT_DIR/runtime" ]]; then
+    DEFAULT_RUNTIME_DIR="$SCRIPT_DIR/runtime"
+fi
+RUNTIME_DIR="${BANGER_RUNTIME_DIR:-$DEFAULT_RUNTIME_DIR}"
+OUT_DIR="$RUNTIME_DIR/void-kernel"
+MIRROR="https://repo-default.voidlinux.org"
+ARCH="x86_64"
+KERNEL_PACKAGE="linux6.12"
+PRINT_REGISTER_FLAGS=0
+
+while [[ $# -gt 0 ]]; do
+    case "$1" in
+        --out-dir)
+            OUT_DIR="${2:-}"
+            shift 2
+            ;;
+        --mirror)
+            MIRROR="${2:-}"
+            shift 2
+            ;;
+        --arch)
+            ARCH="${2:-}"
+            shift 2
+            ;;
+        --kernel-package)
+            KERNEL_PACKAGE="${2:-}"
+            shift 2
+            ;;
+        --print-register-flags)
+            PRINT_REGISTER_FLAGS=1
+            shift
+            ;;
+        -h|--help)
+            usage
+            exit 0
+            ;;
+        *)
+            log "unknown option: $1"
+            usage
+            exit 1
+            ;;
+    esac
+done
+
+MIRROR="$(normalize_mirror "$MIRROR")"
+REPO_URL="$MIRROR/current"
+STATIC_ARCHIVE_URL="$MIRROR/static/xbps-static-latest.x86_64-musl.tar.xz"
+
+if [[ "$PRINT_REGISTER_FLAGS" == "1" ]]; then
+    print_register_flags
+    exit 0
+fi
+
+if [[ "$ARCH" != "x86_64" ]]; then
+    log "unsupported arch: $ARCH"
+    log "this experimental downloader currently supports only x86_64"
+    exit 1
+fi
+if [[ ! -d "$RUNTIME_DIR" ]]; then
+    log "runtime bundle not found: $RUNTIME_DIR"
+    exit 1
+fi
+if [[ -e "$OUT_DIR" ]]; then
+    log "output directory already exists: $OUT_DIR"
+    log "remove it first if you want to re-stage a different Void kernel"
+    exit 1
+fi
+
+require_command curl
+require_command tar
+require_command cp
+require_command find
+require_command grep
+require_command cut
+require_command readelf
+require_command file
+require_command install
+require_command tail
+require_command xz
+require_command gzip
+require_command bzip2
+require_command dracut
+
+TMP_DIR="$(mktemp -d -t banger-void-kernel-XXXXXX)"
+STATIC_DIR="$TMP_DIR/static"
+STAGE_ROOT="$TMP_DIR/root"
+STAGE_OUT="$TMP_DIR/out"
+STATIC_ARCHIVE="$TMP_DIR/xbps-static.tar.xz"
+trap cleanup EXIT
+
+mkdir -p "$STATIC_DIR" "$STAGE_ROOT/var/db/xbps/keys" "$STAGE_OUT/boot" "$STAGE_OUT/lib/modules"
+
+log "downloading static XBPS from $STATIC_ARCHIVE_URL"
+curl -fsSL "$STATIC_ARCHIVE_URL" -o "$STATIC_ARCHIVE"
+tar -xf "$STATIC_ARCHIVE" -C "$STATIC_DIR"
+
+XBPS_INSTALL="$(find_static_binary xbps-install)"
+STATIC_KEYS_DIR="$(find_static_keys_dir)"
+if [[ -z "$XBPS_INSTALL" || ! -x "$XBPS_INSTALL" ]]; then
+    log "failed to locate xbps-install in the static archive"
+    exit 1
+fi
+if [[ -z "$STATIC_KEYS_DIR" || ! -d "$STATIC_KEYS_DIR" ]]; then
+    log "failed to locate Void repository keys in the static archive"
+    exit 1
+fi
+
+cp -a "$STATIC_KEYS_DIR/." "$STAGE_ROOT/var/db/xbps/keys/"
+
+KERNEL_PACKAGE_FILE="$(resolve_kernel_package_file)"
+if [[ -z "$KERNEL_PACKAGE_FILE" ]]; then
+    log "failed to resolve a package file for $KERNEL_PACKAGE in $REPO_URL"
+    exit 1
+fi
+
+log "staging $KERNEL_PACKAGE_FILE into a temporary root"
+env XBPS_ARCH="$ARCH" "$XBPS_INSTALL" -S -y -U -r "$STAGE_ROOT" -R "$REPO_URL" linux-base "$KERNEL_PACKAGE" dracut eudev >/dev/null
+
+VMLINUX_RAW="$(find_latest_matching "$STAGE_ROOT/boot" 'vmlinuz-*' || true)"
+KERNEL_CONFIG="$(find_latest_matching "$STAGE_ROOT/boot" 'config-*' || true)"
+MODULES_DIR="$(find_latest_module_dir "$STAGE_ROOT/usr/lib/modules" || true)"
+KERNEL_VERSION="$(basename "$MODULES_DIR")"
+INITRAMFS_NAME="initramfs-${KERNEL_VERSION}.img"
+INITRAMFS_RAW="$STAGE_OUT/boot/$INITRAMFS_NAME"
+
+if [[ -z "$VMLINUX_RAW" || -z "$KERNEL_CONFIG" || -z "$MODULES_DIR" ]]; then
+    log "staged Void kernel is missing expected boot artifacts"
+    exit 1
+fi
+if [[ ! -x "$STAGE_ROOT/usr/bin/udevd" ]]; then
+    log "staged Void sysroot is missing /usr/bin/udevd after package install"
+    exit 1
+fi
+
+VMLINUX_BASE="$(basename "$VMLINUX_RAW")"
+VMLINUX_OUT="$STAGE_OUT/boot/vmlinux-${VMLINUX_BASE#vmlinuz-}"
+install -m 0644 "$VMLINUX_RAW" "$STAGE_OUT/boot/$VMLINUX_BASE"
+install -m 0644 "$KERNEL_CONFIG" "$STAGE_OUT/boot/$(basename "$KERNEL_CONFIG")"
+build_initramfs "$KERNEL_VERSION" "$MODULES_DIR" "$INITRAMFS_RAW"
+cp -a "$MODULES_DIR" "$STAGE_OUT/lib/modules/"
+
+log "extracting Firecracker kernel from $(basename "$VMLINUX_RAW")"
+if ! extract_vmlinux "$VMLINUX_RAW" "$VMLINUX_OUT"; then
+    log "failed to extract an uncompressed vmlinux from $VMLINUX_RAW"
+    log "raw kernel image type: $(file -b "$VMLINUX_RAW")"
+    exit 1
+fi
+
+cat >"$STAGE_OUT/metadata.json" <<EOF
+{
+  "package": "$KERNEL_PACKAGE_FILE",
+  "kernel_path": "$OUT_DIR/boot/$(basename "$VMLINUX_OUT")",
+  "raw_kernel_path": "$OUT_DIR/boot/$VMLINUX_BASE",
+  "config_path": "$OUT_DIR/boot/$(basename "$KERNEL_CONFIG")",
+  "initrd_path": "$OUT_DIR/boot/$INITRAMFS_NAME",
+  "modules_dir": "$OUT_DIR/lib/modules/$(basename "$MODULES_DIR")"
+}
+EOF
+
+mv "$STAGE_OUT" "$OUT_DIR"
+
+log "staged Void kernel artifacts in $OUT_DIR"
+log "kernel image: $OUT_DIR/boot/$(basename "$VMLINUX_OUT")"
+log "initrd image: $OUT_DIR/boot/$INITRAMFS_NAME"
+log "modules dir: $OUT_DIR/lib/modules/$(basename "$MODULES_DIR")"
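make-void-kernel.sh's `extract_vmlinux` probes the compressed `vmlinuz` for known magic bytes and pipes the payload through the matching decompressor until a valid ELF falls out. A simplified Go sketch of those magic-byte checks (real boot images embed the magic mid-file behind a boot stub, so this prefix-only `compressionOf` helper is an illustration, not a drop-in):

```go
package main

import (
	"bytes"
	"fmt"
)

// compressionOf matches the magic bytes probed by the script's
// try_decompress calls: gzip, xz, bzip2, lzma, lz4 (legacy), zstd.
func compressionOf(b []byte) string {
	switch {
	case bytes.HasPrefix(b, []byte{0x1f, 0x8b, 0x08}): // '\037\213\010'
		return "gzip"
	case bytes.HasPrefix(b, []byte{0xfd, '7', 'z', 'X', 'Z', 0x00}): // '\3757zXZ\000'
		return "xz"
	case bytes.HasPrefix(b, []byte("BZh")):
		return "bzip2"
	case bytes.HasPrefix(b, []byte{0x5d, 0x00, 0x00, 0x00}): // '\135\000\000\000'
		return "lzma"
	case bytes.HasPrefix(b, []byte{0x02, 0x21, 0x4c, 0x18}): // '\002!L\030'
		return "lz4"
	case bytes.HasPrefix(b, []byte{0x28, 0xb5, 0x2f, 0xfd}): // '(\265/\375'
		return "zstd"
	default:
		return "unknown"
	}
}

func main() {
	fmt.Println(compressionOf([]byte{0x1f, 0x8b, 0x08, 0x00}))
}
```

The script handles the boot-stub offset by scanning for each magic anywhere in the file (`grep -abo`) and retrying the decompressor from every hit, which is the same approach as the kernel tree's extract-vmlinux helper.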
90  register-void-image.sh  Executable file

@@ -0,0 +1,90 @@
+#!/usr/bin/env bash
+set -euo pipefail
+
+log() {
+    printf '[register-void-image] %s\n' "$*" >&2
+}
+
+find_latest_matching() {
+    local dir="$1"
+    local pattern="$2"
+    if [[ ! -d "$dir" ]]; then
+        return 1
+    fi
+    find "$dir" -maxdepth 1 -type f -name "$pattern" | sort | tail -n 1
+}
+
+find_latest_module_dir() {
+    local root="$1"
+    if [[ ! -d "$root" ]]; then
+        return 1
+    fi
+    find "$root" -mindepth 1 -maxdepth 1 -type d | sort | tail -n 1
+}
+
+resolve_banger_bin() {
+    if [[ -n "${BANGER_BIN:-}" ]]; then
+        printf '%s\n' "$BANGER_BIN"
+        return
+    fi
+    if [[ -x "$SCRIPT_DIR/banger" ]]; then
+        printf '%s\n' "$SCRIPT_DIR/banger"
+        return
+    fi
+    if command -v banger >/dev/null 2>&1; then
+        command -v banger
+        return
+    fi
+    log "banger binary not found; build it first with 'make build' or set BANGER_BIN"
+    exit 1
+}
+
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+DEFAULT_RUNTIME_DIR="$SCRIPT_DIR"
+if [[ -d "$SCRIPT_DIR/runtime" ]]; then
+    DEFAULT_RUNTIME_DIR="$SCRIPT_DIR/runtime"
+fi
+
+RUNTIME_DIR="${BANGER_RUNTIME_DIR:-$DEFAULT_RUNTIME_DIR}"
+IMAGE_NAME="${VOID_IMAGE_NAME:-void-exp}"
+BANGER_BIN="$(resolve_banger_bin)"
+ROOTFS="$RUNTIME_DIR/rootfs-void.ext4"
+WORK_SEED="$RUNTIME_DIR/rootfs-void.work-seed.ext4"
+PACKAGES="$SCRIPT_DIR/packages.void"
+
+if [[ ! -f "$ROOTFS" ]]; then
+    log "missing Void rootfs: $ROOTFS"
+    exit 1
+fi
+if [[ ! -f "$WORK_SEED" ]]; then
+    log "missing Void work-seed: $WORK_SEED"
+    exit 1
+fi
+
+args=(
+    image register
+    --name "$IMAGE_NAME"
+    --rootfs "$ROOTFS"
+    --work-seed "$WORK_SEED"
+    --packages "$PACKAGES"
+)
+
+if [[ ! -d "$RUNTIME_DIR/void-kernel" ]]; then
+    log "missing staged Void kernel artifacts: $RUNTIME_DIR/void-kernel"
+    log "run 'make void-kernel' before registering $IMAGE_NAME"
+    exit 1
+fi
+
+kernel="$(find_latest_matching "$RUNTIME_DIR/void-kernel/boot" 'vmlinux-*' || true)"
+initrd="$(find_latest_matching "$RUNTIME_DIR/void-kernel/boot" 'initramfs-*' || true)"
+modules="$(find_latest_module_dir "$RUNTIME_DIR/void-kernel/lib/modules" || true)"
+
+if [[ -z "$kernel" || -z "$initrd" || -z "$modules" ]]; then
+    log "staged Void kernel is incomplete; expected vmlinux, initramfs, and modules under $RUNTIME_DIR/void-kernel"
+    exit 1
+fi
+
+log "using staged Void kernel artifacts from $RUNTIME_DIR/void-kernel"
+args+=(--kernel "$kernel" --initrd "$initrd" --modules "$modules")
+
+"$BANGER_BIN" "${args[@]}"
34
verify.sh
34
verify.sh
|
|
@@ -33,6 +33,7 @@ SSH_COMMON_ARGS=(
     -o StrictHostKeyChecking=no
     -o UserKnownHostsFile=/dev/null
 )
+OPENCODE_PORT=4096
 
 firecracker_running() {
     local pid="$1"
@@ -68,6 +69,21 @@ wait_for_ssh() {
     return 1
 }
+
+wait_for_tcp() {
+    local host="$1"
+    local port="$2"
+    local deadline="$3"
+
+    while ((SECONDS < deadline)); do
+        if (exec 3<>/dev/tcp/"$host"/"$port") >/dev/null 2>&1; then
+            return 0
+        fi
+        sleep 1
+    done
+
+    return 1
+}
 
 refresh_vm_metadata() {
     if ! VM_JSON="$(./banger vm show "$VM_NAME" 2>/dev/null)"; then
         return 1
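The `wait_for_tcp` helper added in this hunk relies on bash's `/dev/tcp` redirection feature (a bash builtin behavior, not a real filesystem, so it requires bash rather than POSIX sh). A standalone one-shot version of the same probe, with an illustrative function name, might look like:

```shell
# Illustrative one-shot TCP probe using bash's /dev/tcp redirection.
# Returns 0 if the port accepts a connection, non-zero otherwise.
probe_tcp() {
    local host="$1" port="$2"
    # The subshell keeps fd 3 from leaking into the caller; success of the
    # redirection means connect() to host:port succeeded.
    (exec 3<>"/dev/tcp/${host}/${port}") >/dev/null 2>&1
}
```

Wrapping this probe in a `while ((SECONDS < deadline))` loop, as the hunk does, turns it into a bounded wait without needing `nc` or `curl` in the environment.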
@@ -240,9 +256,21 @@ if ! wait_for_ssh "$GUEST_IP" "$BOOT_DEADLINE"; then
 fi
 ssh "${SSH_COMMON_ARGS[@]}" "root@${GUEST_IP}" "uname -a" >/dev/null
 
 if [[ "$IMAGE_NAME" == "void-exp" ]]; then
-    log "asserting mise and opencode are available in the Void guest"
-    ssh "${SSH_COMMON_ARGS[@]}" "root@${GUEST_IP}" "command -v mise >/dev/null 2>&1 && command -v opencode >/dev/null 2>&1 && mise --version >/dev/null 2>&1 && opencode --version >/dev/null 2>&1" >/dev/null
+    log "asserting opencode is available and listening in the guest"
+    ssh "${SSH_COMMON_ARGS[@]}" "root@${GUEST_IP}" "command -v opencode >/dev/null 2>&1 && ss -H -lntp | awk '\$4 ~ /:${OPENCODE_PORT}\$/ { found = 1 } END { exit found ? 0 : 1 }'" >/dev/null
+    log "asserting opencode server is reachable from the host"
+    if ! wait_for_tcp "$GUEST_IP" "$OPENCODE_PORT" "$BOOT_DEADLINE"; then
+        log "opencode server did not become reachable at ${GUEST_IP}:${OPENCODE_PORT}"
+        dump_diagnostics
+        exit 1
+    fi
+    log "asserting opencode port is reported by banger vm ports"
+    if ! ./banger vm ports "$VM_NAME" | grep -F ":${OPENCODE_PORT}" >/dev/null 2>&1; then
+        log "banger vm ports did not report ${OPENCODE_PORT}"
+        dump_diagnostics
+        exit 1
+    fi
 fi
 
 if (( NAT_ENABLED )); then
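The in-guest assertion above pipes `ss` into `awk` to confirm something is listening on the opencode port. The same check, extracted into a host-side sketch (Linux-only, assumes iproute2's `ss` is installed; the function name is illustrative):

```shell
# Sketch of the listening-port check: exit 0 iff a TCP socket is
# listening on local port $1. ss flags: -H no header, -l listening
# sockets, -n numeric ports, -t TCP only, -p owning process.
# Field 4 of ss output is the local "addr:port", so matching ":PORT$"
# catches any bind address.
port_listening() {
    local port="$1"
    ss -H -lntp | awk -v want=":${port}\$" \
        '$4 ~ want { found = 1 } END { exit found ? 0 : 1 }'
}
```

Checking the listener via `ss` inside the guest and then `wait_for_tcp` from the host distinguishes "service never started" from "service up but unreachable across the NIC", which keeps the failure diagnostics meaningful.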