github.com/ajbt200128/rabbithole — Open-source Rust tool for LLM-driven on-the-fly website generation — live demo: isarabbithole.com
Rabbithole is an open-source Rust server that generates entire websites on demand using the Anthropic Claude API. You provide a seed prompt describing your homepage; every other page is generated lazily the first time it is visited and cached permanently. Pages are isolated — each is produced by a separate LLM call with no shared state beyond what you encode in link prompts. Web search and fetch tools are enabled by default, letting Claude pull real-time content into generated pages.
This guide walks through everything you need to get from zero to a running site in a few minutes.
Rabbithole is built with Rust stable. Install the toolchain via rustup if you haven't already:
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
After installation, make sure cargo and rustc are on your PATH. Verify with:

rustc --version
cargo --version
Rabbithole calls the Claude API for every page generation. You need an Anthropic account with API access. Obtain a key from the Anthropic console and export it:
export ANTHROPIC_API_KEY="sk-ant-..."
Every uncached page visit spends API tokens, so set --max-cost before running publicly.
If you plan to use --db for persistent storage,
SQLite is required. It is available on most systems; on Debian/Ubuntu:
sudo apt install libsqlite3-dev
Clone the repository from GitHub:
git clone https://github.com/ajbt200128/rabbithole
cd rabbithole
Build the release binary (recommended for production and better performance):
cargo build --release
The compiled binary will be at target/release/rabbithole.
Build times vary; on first compile, Rust will download and compile all dependencies.
Expect 1–3 minutes on a modern machine.
Alternatively, use cargo run --release -- [flags] to build and run in one step.
During development, omit --release for faster compilation at the cost of slower runtime:
cargo run -- --seed "My website about space exploration"
The minimum required argument is --seed (or --seed-file).
The seed is a plain-text prompt describing what the homepage of your generated site
should contain. The server starts on port 8080 by default.
cargo run --release -- --seed "A homepage about space exploration"
Or, using the compiled binary directly:
./target/release/rabbithole --seed "A homepage about space exploration"
You should see output similar to:
[INFO] Starting Rabbithole server on http://0.0.0.0:8080
[INFO] Seed prompt loaded (42 chars)
[INFO] Web tools: enabled
[INFO] Depth limit: 5
[INFO] Storage: in-memory
Open http://localhost:8080/ in your browser. The homepage generation begins immediately on first visit. See §8 Visiting the Site for how the loading screen works.
Use --port to run on a different port:
./target/release/rabbithole --seed "My site" --port 3000
For longer or more complex seed prompts, store the prompt in a plain-text file and
pass it with --seed-file. This is strongly recommended for any prompt
longer than a sentence or two — shell quoting and escaping become awkward quickly.
./target/release/rabbithole --seed-file ./prompts/space-homepage.txt
Example space-homepage.txt:
A richly detailed homepage for "Cosmos Explorer" — a website dedicated to space exploration news, mission archives, and educational content. The site has a dark theme (#0a0a1a background, white text, cyan accent #00e5ff). Navigation includes: Home, Missions, News, Solar System, Deep Space, About. Use a dense, information-rich layout similar to NASA's website circa 2010. All section headings in uppercase. Feature a "Mission of the Week" section and a "Latest Launches" feed. Link to at least 8 subpages. No hero banners wider than 600px.
--seed and --seed-file are mutually exclusive.
Using both will produce an error.
By default, Rabbithole stores all generated pages in memory. This means every time you restart the server, all cached pages are lost and will be regenerated on next visit (incurring API cost again).
Use --db with a file path to enable SQLite-backed persistence.
Pages generated in this session survive restarts:
./target/release/rabbithole \
  --seed "A homepage about space exploration" \
  --db ./site.db
On startup, Rabbithole will create the SQLite database file if it does not exist, or re-use it if it does. All previously generated pages will be served immediately from the database without any API calls.
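Both storage modes boil down to the same get-or-generate cache behavior. The sketch below illustrates that pattern with a hypothetical `PageStore` trait and an in-memory backend; the names and structure are illustrative assumptions, not Rabbithole's actual internals (the real SQLite backend would implement the same interface over a database file).

```rust
use std::collections::HashMap;

// Hypothetical storage abstraction; both storage modes act as a
// get-or-generate cache keyed by request path.
trait PageStore {
    fn get(&self, path: &str) -> Option<String>;
    fn put(&mut self, path: &str, html: String);
}

// In-memory mode: contents are lost when the process exits.
struct MemoryStore(HashMap<String, String>);

impl PageStore for MemoryStore {
    fn get(&self, path: &str) -> Option<String> {
        self.0.get(path).cloned()
    }
    fn put(&mut self, path: &str, html: String) {
        self.0.insert(path.to_string(), html);
    }
}

// Serve from cache when possible; otherwise "generate" (stubbed here
// in place of the LLM call) and cache the result.
fn serve(store: &mut impl PageStore, path: &str) -> String {
    if let Some(html) = store.get(path) {
        return html; // cache hit: no API call, no cost
    }
    let html = format!("<html>generated for {path}</html>");
    store.put(path, html.clone());
    html
}

fn main() {
    let mut store = MemoryStore(HashMap::new());
    let first = serve(&mut store, "/about.html");
    let second = serve(&mut store, "/about.html");
    assert_eq!(first, second); // second visit is a cache hit
    println!("cached: {}", store.get("/about.html").is_some());
}
```

Swapping `MemoryStore` for a SQLite-backed store is what makes pages survive restarts: the cache lookup hits the database file instead of a process-local map.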
| Storage Mode | Flag | Survives Restart? | Best For |
|---|---|---|---|
| In-memory (default) | none | No | Quick experiments, ephemeral demos |
| SQLite | --db <path> | Yes | Development, long-running sites, cost control |
Each page generation calls the Claude API and consumes tokens. Costs can add up
unexpectedly, especially on sites with many pages or deep recursion.
The --max-cost flag sets a hard USD spending cap; once reached,
new (uncached) page requests will return an error rather than triggering an API call.
./target/release/rabbithole \
  --seed "A homepage about space exploration" \
  --db ./site.db \
  --max-cost 5.00
Always set --max-cost before exposing Rabbithole to any external traffic or leaving it
running unattended. Without it, there is no upper bound on API spend.
The cost counter is tracked in-memory per process. It resets on restart unless you
combine it with --db and a persistent cost log (see
Configuration reference for details).
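The cap behaves like a simple per-process counter checked before each generation. Here is a minimal sketch of that logic; the type and method names are assumptions for illustration, not Rabbithole's real API.

```rust
// Hypothetical sketch of the documented --max-cost behavior: once the
// running total reaches the cap, new uncached requests are refused
// instead of triggering an API call.
struct CostTracker {
    spent_usd: f64,
    max_usd: Option<f64>, // None = unlimited (the default)
}

impl CostTracker {
    // Returns false when the cap has been reached.
    fn try_charge(&mut self, cost_usd: f64) -> bool {
        if let Some(max) = self.max_usd {
            if self.spent_usd >= max {
                return false; // cap reached: serve an error page instead
            }
        }
        self.spent_usd += cost_usd;
        true
    }
}

fn main() {
    let mut tracker = CostTracker { spent_usd: 0.0, max_usd: Some(5.0) };
    assert!(tracker.try_charge(4.5));  // under the cap: allowed
    assert!(tracker.try_charge(0.6));  // still under when it starts; may overshoot slightly
    assert!(!tracker.try_charge(0.1)); // cap reached: refused
    println!("spent: ${:.2}", tracker.spent_usd);
}
```

Because the counter lives in process memory, a restart resets `spent_usd` to zero, which is exactly the reset behavior described above.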
| Scenario | Approx. Pages | Est. Cost (USD) |
|---|---|---|
| Quick demo, no web tools | 5–15 | $0.10–$0.40 |
| Medium site, web tools on | 20–50 | $0.50–$2.00 |
| Large site, deep recursion | 50–200+ | $2.00–$10.00+ |
Estimates based on Claude 3.5 Sonnet pricing as of early 2025. Actual costs vary by prompt length, model, and web tool usage.
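The table's ranges come from straightforward token arithmetic. The sketch below shows the calculation, assuming Claude 3.5 Sonnet list pricing of $3 per million input tokens and $15 per million output tokens (an assumption; check current Anthropic pricing before budgeting).

```rust
// Back-of-envelope cost arithmetic behind the table above.
// Pricing constants are assumptions based on Claude 3.5 Sonnet list prices.
const INPUT_USD_PER_TOKEN: f64 = 3.0 / 1_000_000.0;
const OUTPUT_USD_PER_TOKEN: f64 = 15.0 / 1_000_000.0;

fn estimate_page_cost(input_tokens: u64, output_tokens: u64) -> f64 {
    input_tokens as f64 * INPUT_USD_PER_TOKEN
        + output_tokens as f64 * OUTPUT_USD_PER_TOKEN
}

fn main() {
    // A typical page: ~5k tokens of prompt, ~3k tokens of generated HTML.
    let per_page = estimate_page_cost(5_000, 3_000);
    println!("per page: ${per_page:.3}");          // ~$0.060
    println!("30 pages: ${:.2}", per_page * 30.0); // ~$1.80, the medium-site range
}
```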
Rabbithole passes two tools to Claude when generating each page: web_search and web_fetch.
These tools are enabled by default. When active, Claude can search for and incorporate real, up-to-date information into generated pages — news articles, documentation, reference data, images (hotlinked), and more. This significantly increases the quality and accuracy of content-heavy pages.
Disable web tools with the --no-web-tools flag:
./target/release/rabbithole \
  --seed "A homepage about space exploration" \
  --no-web-tools
Reasons you might disable web tools include reducing per-page cost and latency, keeping generated content fully self-contained, or running without outbound network access.
See Web Tools documentation for more detail on how search results are injected into the prompt, rate limiting, and tool call behavior.
When you visit any URL that has not yet been generated, Rabbithole immediately
serves a loading placeholder page while generation runs in the background.
The placeholder page polls the /__ready endpoint using JavaScript:
GET /__ready?path=/your/page/path
The /__ready endpoint returns HTTP 200 with {"ready": true}
once the page has been generated and cached, or {"ready": false} if
generation is still in progress. The loading page polls this every 1–2 seconds and
automatically reloads when the page becomes ready.
The request flow for an uncached page:

1. GET /some/page.html — the loading placeholder is served immediately while generation starts.
2. The placeholder polls GET /__ready?path=/some/page.html every ~1.5s.
3. Generation completes; the model output is split on the ---MAPPINGS--- delimiter into page HTML plus a JSON link map.
4. /__ready returns {"ready": true} and the placeholder reloads the page.
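Server-side, the readiness check only needs to know whether the requested path has landed in the page cache yet. A minimal sketch, assuming a set of cached paths (the function name and signature are illustrative, not Rabbithole's actual handler):

```rust
use std::collections::HashSet;

// Hypothetical sketch of the /__ready check: report whether the
// requested path has been generated and cached yet.
fn readiness_json(cache: &HashSet<String>, path: &str) -> String {
    if cache.contains(path) {
        r#"{"ready": true}"#.to_string()
    } else {
        r#"{"ready": false}"#.to_string()
    }
}

fn main() {
    let mut cache = HashSet::new();
    // Before generation finishes, the poller sees false.
    assert_eq!(readiness_json(&cache, "/some/page.html"), r#"{"ready": false}"#);
    // Generation completes and the page is cached.
    cache.insert("/some/page.html".to_string());
    assert_eq!(readiness_json(&cache, "/some/page.html"), r#"{"ready": true}"#);
    println!("ok");
}
```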
Each page has an associated depth — the number of link hops from the seed homepage.
The default depth limit is 5. When a page at depth 5 would generate
links to further pages, those links are still rendered as <a href>
in the HTML, but clicking them will return a static "maximum depth reached" page
rather than triggering a new generation. This prevents infinite recursive site expansion.
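The depth gate described above is a simple routing decision: links are always rendered, but a request past the limit gets the static page instead of a new generation. A sketch under that assumption (the enum and function names are illustrative, not Rabbithole's real internals):

```rust
// Hypothetical sketch of the depth gate: pages at or below max_depth are
// generated; anything deeper gets the static "maximum depth reached" page.
#[derive(Debug, PartialEq)]
enum PageAction {
    Generate,       // call the LLM, or serve from cache
    DepthLimitPage, // static "maximum depth reached" page
}

fn route(request_depth: u32, max_depth: u32) -> PageAction {
    if request_depth > max_depth {
        PageAction::DepthLimitPage
    } else {
        PageAction::Generate
    }
}

fn main() {
    let max_depth = 5; // the default
    assert_eq!(route(5, max_depth), PageAction::Generate); // at the limit: still generated
    assert_eq!(route(6, max_depth), PageAction::DepthLimitPage); // one hop past: static page
    println!("ok");
}
```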
Override the depth limit with --max-depth:
./target/release/rabbithole --seed "..." --max-depth 3
Every generated page embeds debug metadata in the HTML source (visible in browser
DevTools). Open DevTools with F12 and check the <head>
for a <!-- rabbithole-debug --> comment block, or look at the
Console tab where Rabbithole logs structured metadata as a JavaScript
object on page load.
The debug output includes:
| Field | Description |
|---|---|
| prompt | The full prompt string used to generate this page |
| depth | This page's depth from the seed root (0 = homepage) |
| input_tokens | Number of tokens in the prompt sent to Claude |
| output_tokens | Number of tokens in Claude's response |
| cost_usd | Estimated USD cost of this page generation |
| generation_ms | Wall-clock time in milliseconds for generation |
| cached | true if served from cache, false if freshly generated |
| web_tools | Whether web_search/web_fetch were enabled for this generation |
Example console output:
rabbithole {
prompt: "A page about the Apollo 11 mission. Site: Cosmos Explorer...",
depth: 2,
input_tokens: 4821,
output_tokens: 3104,
cost_usd: 0.0412,
generation_ms: 8341,
cached: false,
web_tools: true
}
You're up and running. Here are some directions to explore:
| Flag | Type | Default | Description |
|---|---|---|---|
| --seed <TEXT> | string | required* | Inline seed prompt for the homepage |
| --seed-file <PATH> | path | required* | Path to file containing the seed prompt |
| --db <PATH> | path | in-memory | SQLite database file for persistent page cache |
| --max-cost <USD> | float | unlimited | Hard cap on total API spend in USD |
| --max-depth <N> | int | 5 | Maximum link depth from homepage |
| --no-web-tools | flag | off | Disable web_search and web_fetch tools |
| --port <N> | int | 8080 | HTTP server port |
* One of --seed or --seed-file is required; they are mutually exclusive.