Rabbithole

Every click goes deeper.

Rabbithole is an open-source Rust webserver that generates entire websites on the fly using large language models. You give it a single seed prompt describing a homepage. When a visitor hits a URL, the model generates the HTML for that page — along with links to new pages, each with their own generation prompts. Those linked pages are generated on demand when clicked, recursively, creating a fully explorable website from a single sentence. Pages are cached permanently after first generation, so repeat visitors get the same content. It is arguably the least efficient architecture for a website ever devised, but it does produce something genuinely interesting: a site that grows organically in response to what people actually click on. This very page was generated by Rabbithole, which is either a testament to the tool or a warning about it, depending on your perspective.

Source: github.com/ajbt200128/rabbithole  —  Live: isarabbithole.com

How It Works

  1. Write a seed prompt. One sentence or a paragraph describing what your homepage should contain.
  2. Start the server. cargo run -- --seed "your prompt here". The server starts on port 8080 by default.
  3. Click and explore. Visit http://localhost:8080/. The homepage generates. Click any link — that page generates too, on demand, using the prompt Rabbithole wrote for it.

New pages show a brief loading screen while generation runs in the background; the browser polls a /__ready endpoint to learn when the page is ready to serve. Generation depth is tracked — the default limit is 5 levels deep. Pages beyond the limit are still served if already cached; new pages at the limit generate but produce no further links.
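The readiness check behind that polling loop amounts to a cache lookup. A minimal sketch — the cache type and function name here are illustrative, not Rabbithole's actual internals:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

/// Hypothetical logic backing a /__ready endpoint: a page counts as
/// "ready" once its generated HTML has landed in the cache.
fn is_ready(cache: &Arc<Mutex<HashMap<String, String>>>, path: &str) -> bool {
    cache.lock().unwrap().contains_key(path)
}

fn main() {
    let cache = Arc::new(Mutex::new(HashMap::new()));
    assert!(!is_ready(&cache, "/about.html")); // still generating
    cache
        .lock()
        .unwrap()
        .insert("/about.html".to_string(), "<!DOCTYPE html>...".to_string());
    assert!(is_ready(&cache, "/about.html")); // cached, safe to serve
    println!("ready");
}
```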

Quick Start

# Install Rust if needed: https://rustup.rs
git clone https://github.com/ajbt200128/rabbithole
cd rabbithole

export ANTHROPIC_API_KEY=sk-ant-...

# Run with a seed prompt
cargo run -- --seed "A homepage about space exploration"

# Or load seed from a file
cargo run -- --seed-file seed.txt

# Recommended: set a cost budget to avoid surprise bills
cargo run -- --seed "..." --max-cost 5.00

# Use SQLite for persistence across restarts
cargo run -- --seed "..." --db site.db

Web tools (web_search, web_fetch) are on by default. Disable with --no-web-tools.

Features

  • On-demand generation — pages are created when first visited, then cached permanently
  • Depth limiting — configurable recursion cap (default: 5) prevents infinite expansion
  • Two storage backends — in-memory HashMap (default) or SQLite (--db) for persistence
  • Streaming SSE — real-time generation progress logs with token throughput and timing
  • Web tools — built-in web_search and web_fetch so the model can research real content and hotlink real images
  • Retry logic — retries up to 3 times on malformed output (which happens)
  • Cost tracking — atomic spend counter with --max-cost budget cap; exceeding budget redirects to 404 instead of generating
  • Debug console — every page injects a <script> logging prompt, depth, token count, cost, and generation time to the browser devtools (F12)
  • Configurable model — Claude Opus, Sonnet, or Haiku via --model
  • Fly.io deployment — includes Dockerfile, fly.toml, and litefs.yml for LiteFS-backed SQLite replication
  • CI/CD — GitHub Actions for tests + clippy + fmt, cross-compiled releases (Linux/macOS, amd64/arm64), and auto-deploy to Fly.io on push to main
  • No frontend framework — all generated HTML is self-contained with inline CSS and JS
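The "atomic spend counter" from the feature list can be pictured as a lock-free integer tracking micro-dollars, checked before each generation. A minimal sketch under that assumption — the struct and field names are illustrative, not Rabbithole's actual code:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

/// Spend tracked in micro-dollars so it fits a lock-free integer.
struct Budget {
    spent_microdollars: AtomicU64,
    max_microdollars: u64,
}

impl Budget {
    /// Record a charge and report whether the budget still permits it
    /// (over budget => the server would serve a 404 instead of generating).
    fn charge(&self, cost_microdollars: u64) -> bool {
        let total = self
            .spent_microdollars
            .fetch_add(cost_microdollars, Ordering::Relaxed)
            + cost_microdollars;
        total <= self.max_microdollars
    }
}

fn main() {
    // --max-cost 5.00 corresponds to 5_000_000 micro-dollars.
    let budget = Budget {
        spent_microdollars: AtomicU64::new(0),
        max_microdollars: 5_000_000,
    };
    assert!(budget.charge(4_000_000)); // $4.00 spent, still under budget
    assert!(!budget.charge(2_000_000)); // would exceed $5.00
    println!("over budget");
}
```

Using `fetch_add` rather than a compare-and-swap loop keeps concurrent requests cheap; a request that tips over the cap still records its spend, which errs on the side of undercounting remaining budget rather than overspending.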

Model Pricing Reference

  • Claude Opus: $15 / $75 per million tokens (input/output)
  • Claude Sonnet: $3 / $15 per million tokens
  • Claude Haiku: $0.80 / $4 per million tokens

Set --max-cost unless you enjoy surprise API bills.
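To make the prices above concrete, here is the per-call arithmetic — the token counts in the example are illustrative, not measured from Rabbithole:

```rust
/// Cost in dollars for one API call, given per-million-token prices.
fn call_cost(input_tokens: u64, output_tokens: u64, in_per_m: f64, out_per_m: f64) -> f64 {
    input_tokens as f64 / 1e6 * in_per_m + output_tokens as f64 / 1e6 * out_per_m
}

fn main() {
    // A hypothetical page: ~2k prompt tokens in, ~4k HTML tokens out, on Sonnet.
    let c = call_cost(2_000, 4_000, 3.0, 15.0);
    assert!((c - 0.066).abs() < 1e-9); // $0.006 input + $0.060 output
    println!("{c:.3}");
}
```

At those rates a few hundred clicks on uncached pages adds up quickly, which is the argument for --max-cost.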

Live Demos

  • isarabbithole.com — this site, Rabbithole's own project documentation. Plain, minimal, functional, like gcc.gnu.org.
  • acapa.isarabbithole.com — ACAPA, the American Competitive Apple Picking Association. Deliberately ugly early-2000s web aesthetic.
  • cgpa.isarabbithole.com — CGPA, Cat Girl Program Analysis, a niche forum on program analysis and type theory. Dark phpBB forum style.

Tech Stack

Documentation

Response Format

Rabbithole instructs the model to produce a complete HTML document followed by a ---MAPPINGS--- delimiter and a JSON array of {url, prompt} objects. The server parses the delimiter, serves the HTML, and stores the mappings for future visits. Each mapping also carries a depth counter, incremented from the seed's depth of 1. At the configured depth limit, the model is instructed to generate the page but produce an empty mappings array.

<!DOCTYPE html>
...full HTML page...
</html>
---MAPPINGS---
[{"url": "/some/page.html", "prompt": "Full context + page description..."}]
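Splitting such a response is a single delimiter search. A minimal, dependency-free sketch — this is not Rabbithole's actual parser, and it leaves the JSON mappings undecoded:

```rust
/// Split a model response into the HTML document and the raw JSON
/// mappings string that follows the ---MAPPINGS--- delimiter.
/// Returns None if the delimiter is missing (the malformed-output
/// case that triggers a retry).
fn split_response(raw: &str) -> Option<(&str, &str)> {
    let (html, mappings) = raw.split_once("---MAPPINGS---")?;
    Some((html.trim(), mappings.trim()))
}

fn main() {
    let raw = "<!DOCTYPE html>\n...\n</html>\n---MAPPINGS---\n[{\"url\": \"/a.html\", \"prompt\": \"...\"}]";
    let (html, mappings) = split_response(raw).expect("delimiter missing");
    assert!(html.ends_with("</html>"));
    assert!(mappings.starts_with('['));
    assert!(split_response("no delimiter here").is_none()); // retry path
    println!("parsed");
}
```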

The system prompt instructs the model that each page is generated in complete isolation — prompts must carry all context (theme, style, lore, terminology) for consistency. Whether this actually works is, charitably, hit or miss.