Every click goes deeper.
Rabbithole is an open-source Rust webserver that generates entire websites on the fly using large language models. You give it a single seed prompt describing a homepage. When a visitor hits a URL, the model generates the HTML for that page — along with links to new pages, each with their own generation prompts. Those linked pages are generated on demand when clicked, recursively, creating a fully explorable website from a single sentence. Pages are cached permanently after first generation, so repeat visitors get the same content. It is arguably the least efficient architecture for a website ever devised, but it does produce something genuinely interesting: a site that grows organically in response to what people actually click on. This very page was generated by Rabbithole, which is either a testament to the tool or a warning about it, depending on your perspective.
Source: github.com/ajbt200128/rabbithole — Live: isarabbithole.com
Run cargo run -- --seed "your prompt here". The server starts on port 8080 by default; visit http://localhost:8080/. The homepage generates. Click any link — that page generates too, on demand, using the prompt Rabbithole wrote for it.
New pages load through a brief loading screen while generation runs in the background.
The browser polls a /__ready endpoint that reports when the page is ready to serve.
Generation depth is tracked — the default limit is 5 levels deep. Pages beyond the limit
are still served if already cached; new pages at the limit generate but produce no further links.
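The serving rules above can be sketched as a small decision function. This is a hypothetical illustration, not Rabbithole's actual code; the names (`decide`, `Action`, `MAX_DEPTH`) are ours. It captures the three cases: cached pages always serve, uncached pages with a stored prompt generate, and pages at the depth limit generate without emitting further links.

```rust
// Hypothetical sketch of the serving decision described above
// (not Rabbithole's actual implementation).

const MAX_DEPTH: u32 = 5; // default depth limit from the docs

enum Action {
    /// Page was generated before: serve the cached HTML, regardless of depth.
    ServeCached(String),
    /// Page has a stored prompt but no cached HTML: generate it now.
    /// `allow_links` is false once the page sits at the depth limit,
    /// so the model is told to emit an empty mappings array.
    Generate { allow_links: bool },
    /// No cached HTML and no stored prompt for this URL.
    NotFound,
}

fn decide(cached: Option<String>, depth: u32, has_prompt: bool) -> Action {
    match (cached, has_prompt) {
        (Some(html), _) => Action::ServeCached(html),
        (None, true) => Action::Generate { allow_links: depth < MAX_DEPTH },
        (None, false) => Action::NotFound,
    }
}
```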
# Install Rust if needed: https://rustup.rs
git clone https://github.com/ajbt200128/rabbithole
cd rabbithole
export ANTHROPIC_API_KEY=sk-ant-...
# Run with a seed prompt
cargo run -- --seed "A homepage about space exploration"
# Or load seed from a file
cargo run -- --seed-file seed.txt
# Recommended: set a cost budget to avoid surprise bills
cargo run -- --seed "..." --max-cost 5.00
# Use SQLite for persistence across restarts
cargo run -- --seed "..." --db site.db
Web tools (web_search, web_fetch) are on by default. Disable with --no-web-tools.
- SQLite database (--db) for persistence across restarts
- web_search and web_fetch so the model can research real content and hotlink real images
- --max-cost budget cap; exceeding the budget redirects to 404 instead of generating
- An injected <script> logging prompt, depth, token count, cost, and generation time to the browser devtools (F12)
- --model to select the model

Set --max-cost unless you enjoy surprise API bills.
| Site | Concept | Style |
|---|---|---|
| isarabbithole.com | This site — Rabbithole's own project documentation | Plain, minimal, functional — like gcc.gnu.org |
| acapa.isarabbithole.com | ACAPA: American Competitive Apple Picking Association | Deliberately ugly early-2000s web aesthetic |
| cgpa.isarabbithole.com | CGPA: Cat Girl Program Analysis — niche forum on program analysis and type theory | Dark phpBB forum style |
Rabbithole instructs the model to produce a complete HTML document followed by a
---MAPPINGS--- delimiter and a JSON array of {url, prompt} objects.
The server parses the delimiter, serves the HTML, and stores the mappings for future visits.
Each mapping also carries a depth counter, incremented from the seed's depth of 1.
At the configured depth limit, the model is instructed to generate the page but produce an empty mappings array.
<!DOCTYPE html>
...full HTML page...
</html>
---MAPPINGS---
[{"url": "/some/page.html", "prompt": "Full context + page description..."}]
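The split step can be sketched in a few lines. This is our own illustrative helper, not Rabbithole's actual function; a real implementation would also parse the mappings string as JSON (e.g. with serde_json) rather than returning it raw.

```rust
// Minimal sketch of splitting a model response at the mappings delimiter,
// as described above. Names are illustrative, not Rabbithole's.

/// Split a raw model response into (html, mappings_json).
/// Returns None if the delimiter is missing.
fn split_response(raw: &str) -> Option<(&str, &str)> {
    let (html, mappings) = raw.split_once("---MAPPINGS---")?;
    Some((html.trim(), mappings.trim()))
}
```

Using `split_once` means only the first occurrence of the delimiter is treated as the boundary, so a page that happens to mention the delimiter string in its mappings is still split consistently.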
The system prompt instructs the model that each page is generated in complete isolation — prompts must carry all context (theme, style, lore, terminology) for consistency. Whether this actually works is, charitably, hit or miss.