This page covers deploying Rabbithole — the open-source Rust tool that dynamically generates entire websites on the fly using LLMs — to production on Fly.io. The live instance runs at isarabbithole.com. Source is at github.com/ajbt200128/rabbithole.
Rabbithole is deployed as a single Docker container on Fly.io. Page HTML is cached in a SQLite database so that each URL only needs to be generated once. To keep the cache consistent and durable across multiple Fly.io Machines (and across restarts), the deployment uses LiteFS — a FUSE-based distributed filesystem that transparently replicates SQLite databases across nodes.
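The get-or-generate cache pattern described above can be sketched with Python's stdlib `sqlite3` (an illustration only; the real Rabbithole is Rust, and the `pages` table name and `generate` callback here are assumptions, not its actual schema):

```python
import sqlite3

def get_or_generate(conn, url, generate):
    """Return cached HTML for a URL, invoking the (expensive) LLM
    generator only on a cache miss. Schema is illustrative."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS pages (url TEXT PRIMARY KEY, html TEXT)"
    )
    row = conn.execute("SELECT html FROM pages WHERE url = ?", (url,)).fetchone()
    if row is not None:
        return row[0]          # cache hit: no LLM call
    html = generate(url)       # cache miss: one LLM call, then persist
    conn.execute("INSERT INTO pages (url, html) VALUES (?, ?)", (url, html))
    conn.commit()
    return html

conn = sqlite3.connect(":memory:")   # in production: /litefs/rabbithole.db
calls = []
def fake_llm(url):
    calls.append(url)
    return f"<html>generated for {url}</html>"

first = get_or_generate(conn, "/wonderland", fake_llm)
again = get_or_generate(conn, "/wonderland", fake_llm)  # served from cache
# first == again, and fake_llm ran exactly once
```

Because LiteFS replicates the underlying SQLite file, every machine sees the same `pages` table and the generator runs at most once per URL cluster-wide.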
Stack summary:
- LiteFS binary copied from the official `flyio/litefs:0.5` image
- SQLite database exposed at `/litefs` via FUSE; Rabbithole reads/writes its SQLite cache there
- Fly.io volume (`rabbithole_data`) mounted at `/var/lib/litefs` for persistence
The build uses a multi-stage Dockerfile.
cargo-chef is used to separate dependency compilation from application compilation, enabling Docker layer caching.
As long as Cargo.toml and Cargo.lock do not change, the dependency layer is reused on subsequent builds — dramatically reducing build times.
Three stages:
1. planner: runs `cargo chef prepare` to produce `recipe.json` (a fingerprint of dependencies)
2. builder: runs `cargo chef cook` to pre-build all dependencies (cached layer), then compiles the app
3. runtime: a slim Debian image that receives only the compiled binaries

# syntax=docker/dockerfile:1
# ── Stage 1: planner ─────────────────────────────────────────────
FROM lukemathwalker/cargo-chef:latest-rust-1 AS planner
WORKDIR /app
COPY . .
RUN cargo chef prepare --recipe-path recipe.json
# ── Stage 2: builder ─────────────────────────────────────────────
FROM lukemathwalker/cargo-chef:latest-rust-1 AS builder
WORKDIR /app
COPY --from=planner /app/recipe.json recipe.json
# This layer is cached as long as dependencies don't change
RUN cargo chef cook --release --recipe-path recipe.json
# Now copy source and build the real binary
COPY . .
RUN cargo build --release --bin rabbithole
# ── Stage 3: runtime ─────────────────────────────────────────────
FROM debian:bookworm-slim AS runtime
# Install runtime dependencies needed by LiteFS
RUN apt-get update -y \
&& apt-get install -y --no-install-recommends \
ca-certificates \
fuse3 \
sqlite3 \
&& rm -rf /var/lib/apt/lists/*
# Copy LiteFS binary from the official image
COPY --from=flyio/litefs:0.5 /usr/local/bin/litefs /usr/local/bin/litefs
# Copy the Rabbithole application binary
COPY --from=builder /app/target/release/rabbithole /usr/local/bin/rabbithole
# Copy LiteFS configuration
COPY etc/litefs.yml /etc/litefs.yml
WORKDIR /app
# LiteFS is the entrypoint; it mounts the FUSE filesystem, then
# starts the Rabbithole binary as a supervised subprocess.
ENTRYPOINT ["litefs", "mount"]
The LiteFS binary is copied from `flyio/litefs:0.5`. Always pin the LiteFS version to avoid unexpected changes. The `fuse3` package is required for the FUSE mount to work.
The fly.toml file configures your Fly.io application. Key settings: the app name, the primary region (ord = Chicago), the internal HTTP port (8080 — which is the LiteFS proxy port), a health check, the volume mount for LiteFS data, and any environment variables.
app = "rabbithole"
primary_region = "ord"
[build]
dockerfile = "Dockerfile"
[http_service]
internal_port = 8080
force_https = true
auto_stop_machines = false
auto_start_machines = true
min_machines_running = 1
[http_service.concurrency]
type = "requests"
hard_limit = 25
soft_limit = 20
[[http_service.checks]]
grace_period = "10s"
interval = "30s"
method = "GET"
timeout = "5s"
path = "/"
[env]
RUST_LOG = "info"
DATABASE_URL = "sqlite:///litefs/rabbithole.db"
[[mounts]]
source = "rabbithole_data"
destination = "/var/lib/litefs"
Set `auto_stop_machines = false` to prevent Fly.io from shutting down machines automatically. LiteFS uses distributed leases and should not be combined with autostop/autostart: a stale machine winning the lease after a restart could cause data loss.
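The `hard_limit`/`soft_limit` pair in `[http_service.concurrency]` shapes how Fly's proxy routes requests: machines under the soft limit are preferred, and a machine at its hard limit receives no new requests. A rough sketch of that selection logic (my simplification, not Fly's actual algorithm):

```python
def pick_machine(in_flight, soft_limit=20, hard_limit=25):
    """Pick a machine index for a new request: prefer any machine under
    the soft limit, fall back to one under the hard limit, and return
    None (queue/retry) when every machine is saturated."""
    under_soft = [i for i, n in enumerate(in_flight) if n < soft_limit]
    if under_soft:
        return min(under_soft, key=lambda i: in_flight[i])
    under_hard = [i for i, n in enumerate(in_flight) if n < hard_limit]
    if under_hard:
        return min(under_hard, key=lambda i: in_flight[i])
    return None  # all machines at hard_limit: request must wait
```

Tight limits make sense here: each uncached request holds an LLM generation in flight, so a low concurrency ceiling protects both latency and API spend.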
LiteFS is configured via /etc/litefs.yml. This file controls the FUSE mount directory, the built-in HTTP proxy (which forwards port 8080 to the app on 8081, and handles replica consistency), the Consul-based primary lease, and the exec section that launches Rabbithole after LiteFS has connected to the cluster.
# /etc/litefs.yml
# The FUSE section configures the mount directory. Your application
# reads and writes SQLite databases from this directory.
fuse:
dir: "/litefs"
# LiteFS stores its internal transaction files and state here.
# This must be on a persistent volume (see fly.toml [[mounts]]).
data:
dir: "/var/lib/litefs"
# The built-in HTTP proxy handles:
# - Forwarding write requests from replicas to the primary node
# - Ensuring replicas are caught up before serving reads
# Fly.io routes external traffic to :8080; we forward it to the
# app which listens on :8081.
proxy:
addr: ":8080"
target: "localhost:8081"
db: "rabbithole.db"
passthrough-on-error: true
# LiteFS API server (internal, node-to-node communication only)
http:
addr: ":20202"
# Primary election via Consul. fly consul attach provisions a
# Consul cluster automatically for your Fly.io app.
lease:
type: "consul"
advertise-url: "http://${HOSTNAME}.vm.${FLY_APP_NAME}.internal:20202"
consul:
url: "${FLY_CONSUL_URL}"
key: "rabbithole/primary"
candidate: true
promote: true
# exec: commands run AFTER LiteFS has mounted and connected.
# LiteFS acts as a supervisor for the application process.
exec:
- cmd: "/usr/local/bin/rabbithole --port 8081"
Run `fly consul attach` to provision a Consul cluster for your app. This is required for the Consul-based lease to work. The `FLY_CONSUL_URL` environment variable is injected automatically by Fly.io after attaching.
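The consistency guarantee the proxy provides can be pictured with transaction IDs: after a write on the primary the client learns the latest TXID (LiteFS tracks this via a cookie), and a replica serves a later read only once it has replayed at least that transaction. A toy model of that check (assumed behavior, heavily simplified from the real protocol):

```python
class Node:
    """Toy LiteFS replication state: the primary assigns monotonically
    increasing transaction IDs; replicas replay them asynchronously."""
    def __init__(self):
        self.txid = 0

def write(primary):
    primary.txid += 1
    return primary.txid          # returned to the client (e.g. as a cookie)

def replicate(replica, primary):
    replica.txid = primary.txid  # replica catches up

def can_serve_read(replica, client_txid):
    # Serve only if the replica has replayed the client's last-seen
    # write; otherwise the proxy waits or redirects to the primary.
    return replica.txid >= client_txid

primary, replica = Node(), Node()
cookie = write(primary)                    # client writes via the primary
stale = can_serve_read(replica, cookie)    # False: replica not caught up
replicate(replica, primary)
fresh = can_serve_read(replica, cookie)    # True after replication
```

This is why the proxy sits in front of the app: it enforces the wait so Rabbithole itself never has to reason about replication lag.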
If you need non-root access to the LiteFS mount, create etc/fuse.conf with:
user_allow_other
And copy it into your Docker image:
COPY etc/fuse.conf /etc/fuse.conf
Fly.io volumes provide persistent block storage. LiteFS stores its internal data (including LTX transaction files) here. Create a volume in the same region as your app:
fly volumes create rabbithole_data \
--size 1 \
--region ord
Options:
| Flag | Value | Description |
|---|---|---|
| `--size` | 1 | Volume size in GB (1 GB is sufficient for most page caches) |
| `--region` | ord | Fly.io region code (`ord` = Chicago O'Hare) |
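A back-of-envelope check on the 1 GB default, assuming a cached page (HTML plus SQLite overhead) averages around 50 KB (a made-up figure; measure your own cache before resizing):

```python
volume_bytes = 1 * 1024**3    # 1 GB volume
avg_page_bytes = 50 * 1024    # assumed ~50 KB per cached page
pages = volume_bytes // avg_page_bytes
print(pages)  # → 20971 pages fit before the volume needs resizing
```

Roughly 20,000 cached pages per gigabyte; Fly volumes can be extended later with `fly volumes extend` if the cache outgrows this.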
If you scale to multiple machines, create one volume per machine. Each volume must be in the same region as its machine:
# Create a second volume for a second machine
fly volumes create rabbithole_data --size 1 --region ord
# List all volumes
fly volumes list
Secrets are stored encrypted by Fly.io and injected as environment variables at runtime. Never commit API keys to source control.
# Required: Anthropic API key for Claude (used to generate pages)
fly secrets set ANTHROPIC_API_KEY=sk-ant-api03-...
# Optional: cap total LLM spend (in USD). Requests are rejected
# once this cumulative cost is exceeded.
fly secrets set RABBITHOLE_MAX_COST=10.00
# After attaching Consul (required for LiteFS primary election):
fly consul attach
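How a cumulative spend cap like `RABBITHOLE_MAX_COST` behaves can be sketched as follows (hypothetical logic; the real enforcement lives inside Rabbithole and may differ):

```python
class CostGuard:
    """Track cumulative LLM spend and reject generation requests once
    the configured cap (e.g. RABBITHOLE_MAX_COST=10.00) is exhausted.
    Illustrative only; the class name and semantics are assumptions."""
    def __init__(self, max_cost_usd):
        self.max_cost_usd = max_cost_usd
        self.spent_usd = 0.0

    def try_spend(self, cost_usd):
        if self.spent_usd + cost_usd > self.max_cost_usd:
            return False          # reject: cap would be exceeded
        self.spent_usd += cost_usd
        return True
```

Because the cap is cumulative rather than per-request, a public instance fails closed once the budget is spent instead of silently running up an API bill.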
Verify secrets are set (values are never shown):
fly secrets list
| Secret | Required | Description |
|---|---|---|
| `ANTHROPIC_API_KEY` | Yes | Anthropic Claude API key (`sk-ant-...`) |
| `RABBITHOLE_MAX_COST` | No | Cumulative spend cap in USD (e.g. 10.00) |
| `FLY_CONSUL_URL` | Yes (LiteFS) | Auto-injected by `fly consul attach` |
With everything configured, deploy with a single command:
fly deploy
This will:
1. Build the Docker image using the multi-stage Dockerfile
2. Push the image to Fly's container registry
3. Create a new release and update your Machines
4. Wait for health checks to pass before completing the rollout
Useful deploy flags:
# Use Fly's remote builder instead of building locally
fly deploy --remote-only
# Deploy without health check waiting (faster, less safe)
fly deploy --detach
# Watch live logs during/after deploy
fly logs
Check the status of your machines:
fly status
fly machine list
The repository includes two GitHub Actions workflows: one for continuous deployment on push to main, and one for running tests and lints on every push and pull request.
Deploy workflow (`.github/workflows/deploy.yml`):
name: Deploy to Fly.io
on:
push:
branches:
- main
jobs:
deploy:
name: Deploy
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: superfly/flyctl-actions/setup-flyctl@master
- name: Deploy to Fly.io
run: fly deploy --remote-only
env:
FLY_API_TOKEN: ${{ secrets.FLY_API_TOKEN }}
Add `FLY_API_TOKEN` to your repository's GitHub Actions secrets. Generate a token with `fly tokens create deploy`.
Test workflow (`.github/workflows/test.yml`):
name: Test
on:
push:
pull_request:
jobs:
test:
name: Test
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Install Rust stable
uses: dtolnay/rust-toolchain@stable
with:
components: clippy, rustfmt
- name: Cache dependencies
uses: Swatinem/rust-cache@v2
- name: Run tests
run: cargo test --all-features
- name: Run Clippy
run: cargo clippy --all-targets --all-features -- -D warnings
- name: Check formatting
run: cargo fmt --all --check
On every git tag push (e.g. v0.2.0), a GitHub Actions workflow builds release binaries for four targets and uploads them as GitHub Release assets.
Release workflow (`.github/workflows/release.yml`):
name: Release
on:
push:
tags:
- 'v*'
jobs:
build:
name: Build ${{ matrix.target }}
runs-on: ${{ matrix.os }}
strategy:
matrix:
include:
- target: x86_64-unknown-linux-gnu
os: ubuntu-latest
asset_name: rabbithole-linux-amd64
- target: aarch64-unknown-linux-gnu
os: ubuntu-latest
asset_name: rabbithole-linux-arm64
- target: x86_64-apple-darwin
os: macos-latest
asset_name: rabbithole-macos-amd64
- target: aarch64-apple-darwin
os: macos-latest
asset_name: rabbithole-macos-arm64
steps:
- uses: actions/checkout@v4
- name: Install Rust stable
uses: dtolnay/rust-toolchain@stable
with:
targets: ${{ matrix.target }}
- name: Install cross (Linux ARM64 only)
if: matrix.target == 'aarch64-unknown-linux-gnu'
run: cargo install cross --locked
- name: Build (cross, Linux ARM64)
if: matrix.target == 'aarch64-unknown-linux-gnu'
run: cross build --release --target ${{ matrix.target }}
- name: Build (cargo, all others)
if: matrix.target != 'aarch64-unknown-linux-gnu'
run: cargo build --release --target ${{ matrix.target }}
- name: Rename binary
run: |
cp target/${{ matrix.target }}/release/rabbithole \
${{ matrix.asset_name }}
- name: Upload to GitHub Release
uses: softprops/action-gh-release@v2
with:
files: ${{ matrix.asset_name }}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
Produced assets per release:
| Asset name | Target triple | Platform |
|---|---|---|
| `rabbithole-linux-amd64` | `x86_64-unknown-linux-gnu` | Linux x86-64 |
| `rabbithole-linux-arm64` | `aarch64-unknown-linux-gnu` | Linux ARM64 |
| `rabbithole-macos-amd64` | `x86_64-apple-darwin` | macOS Intel |
| `rabbithole-macos-arm64` | `aarch64-apple-darwin` | macOS Apple Silicon |
To trigger a release, push a version tag:
git tag v0.2.0
git push origin v0.2.0
Rabbithole caches generated HTML pages in SQLite so each URL only needs to call the LLM API once. Without a shared, replicated database, each Fly.io Machine would maintain its own isolated in-memory or on-disk cache — meaning a page cached on machine A would be regenerated from scratch when the next request lands on machine B, incurring unnecessary API cost and latency.
LiteFS solves this by acting as a transparent passthrough filesystem. It intercepts SQLite's file I/O at the FUSE layer, captures each transaction as an LTX file, and streams those transactions to all replica nodes in real time. The result: every Machine in the cluster shares the same page cache, with each node holding a full local copy for fast reads.
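The cost difference is easy to quantify: with per-machine caches each machine pays for its own first miss per URL, while a shared replicated cache makes the whole cluster pay once. A small simulation (assuming round-robin request routing):

```python
def llm_calls(requests, machines, shared_cache):
    """Count LLM generations for a request sequence routed round-robin
    across `machines`, with either one shared cache or one per machine."""
    caches = [set()] if shared_cache else [set() for _ in range(machines)]
    calls = 0
    for i, url in enumerate(requests):
        cache = caches[0] if shared_cache else caches[i % machines]
        if url not in cache:
            calls += 1            # cache miss: one paid LLM generation
            cache.add(url)
    return calls

reqs = ["/a", "/a", "/a", "/b", "/b", "/b"]
isolated = llm_calls(reqs, machines=3, shared_cache=False)  # 6 generations
shared = llm_calls(reqs, machines=3, shared_cache=True)     # 2 generations
```

With isolated caches the API cost scales with the machine count; with LiteFS it stays constant per unique URL.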
Key properties relevant to Rabbithole:
- The app reads and writes `/litefs/rabbithole.db` like a normal SQLite file. No driver changes needed.
- Every node holds a full local copy of the database, so cache reads are always local and fast.
- Writes must happen on the primary; the LiteFS proxy handles this transparently for HTTP traffic.
- Do not stop machines automatically (via `auto_stop_machines`) when using LiteFS. A machine restarted after being stopped may hold a stale LiteFS state and could win the primary lease, potentially causing data rollback.
| Option | Pros | Cons |
|---|---|---|
| In-memory HashMap | Zero setup, fast | Lost on restart; not shared across machines |
| Single SQLite on volume | Persistent, simple | Only one machine can mount; no replication |
| Postgres (Fly) | Mature, scalable | Heavier, requires separate app/cluster |
| LiteFS (chosen) | SQLite simplicity + replication; data local to app | Pre-1.0; requires FUSE; Consul dependency |