Frequently asked questions about Rabbithole. Source at
github.com/ajbt200128/rabbithole.
Live demo at isarabbithole.com.
Note: This documentation site is itself generated by Rabbithole. The model generating these pages has no access to the actual source code and is working from a prompt. Some details may be imprecise. See
Why is the documentation sometimes inaccurate?
Cost & Pricing
Does this cost money?
Yes. Rabbithole calls the Anthropic Claude API for each uncached page view. A typical page generation costs roughly $0.05–$0.50 per request depending on the model used (claude-3-5-sonnet, claude-3-haiku, etc.) and the complexity of the page prompt. Once a page is generated, it is cached indefinitely in SQLite and served for free on subsequent visits. A site with mostly repeat traffic will accrue costs only on the first hit per URL. A site where every URL is unique — or one that is crawled aggressively — can become expensive quickly. You are responsible for your own Anthropic API bill. Rabbithole itself is free and open-source.
Rough estimates by model (subject to Anthropic pricing changes):
- claude-3-haiku: ~$0.01–$0.05 per page (fast, cheaper, less capable)
- claude-3-5-sonnet: ~$0.05–$0.20 per page (default, good balance)
- claude-3-opus: ~$0.20–$0.50 per page (expensive, rarely needed)
These are rough approximations. Actual cost depends on prompt length, output length, and current Anthropic pricing. Check
anthropic.com/pricing for current rates.
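The arithmetic behind these estimates is simple: input tokens times the input rate, plus output tokens times the output rate. A minimal sketch, where the token counts and per-million-token prices are illustrative assumptions, not current Anthropic pricing:

```python
def estimate_page_cost(input_tokens, output_tokens,
                       input_price_per_mtok, output_price_per_mtok):
    """Back-of-envelope cost for one uncached page generation.

    Prices are dollars per million tokens; check anthropic.com/pricing
    for real numbers -- the values used below are illustrative only.
    """
    return (input_tokens / 1_000_000 * input_price_per_mtok
            + output_tokens / 1_000_000 * output_price_per_mtok)

# Hypothetical example: a 2,000-token prompt producing a 4,000-token
# HTML page, at assumed rates of $3/MTok in and $15/MTok out.
cost = estimate_page_cost(2_000, 4_000, 3.0, 15.0)
print(f"~${cost:.3f} per page")  # prints "~$0.066 per page"
```

Output length dominates: a page twice as long roughly doubles the bill, while a longer prompt matters far less at typical input rates.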
LLM Providers
Can I use other LLM providers besides Anthropic?
Claude (via the Anthropic API) is the primary and currently the only first-class supported provider. The system prompt and output format expectations are tightly coupled to how Claude behaves. Support for other providers (OpenAI, Gemini, Ollama, etc.) is not built in, but the codebase is open-source and contributions are welcome. If you want to add a provider, the relevant integration point is in the page generation logic. See
the repository for details.
Which Claude model does Rabbithole use?
This is configurable. See
Configuration for the relevant environment variable or config key. The default is a recent Claude model; the documentation for the specific default may lag behind the actual code.
Bad Page Generation
What if a page generates badly — broken layout, wrong content, hallucinated information?
Delete the cached entry from the SQLite database and reload the page. Rabbithole will regenerate it on the next request. You can do this with a standard SQLite client:
sqlite3 cache.db "DELETE FROM pages WHERE url = '/your/path.html';"
The exact table and column names may vary; check the schema with .schema in the SQLite shell. After deletion, the next HTTP request to that URL will trigger a fresh generation.
Can I edit generated pages directly?
Yes. The cache is just a SQLite database. You can update the stored HTML directly. It will not be regenerated unless you delete the row. There is no integrity check preventing manual edits.
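A sketch of such a direct edit using Python's built-in sqlite3 module. The `pages(url, html)` schema here is an assumption for demonstration purposes (the snippet builds a throwaway in-memory database); against a real deployment you would open cache.db and verify the actual table and column names with `.schema` first:

```python
import sqlite3

# Demonstration database mimicking an assumed pages(url, html) schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (url TEXT PRIMARY KEY, html TEXT)")
conn.execute("INSERT INTO pages VALUES ('/faq.html', '<h1>Old</h1>')")

# Overwrite the cached HTML in place; Rabbithole will keep serving this
# row as-is and will not regenerate unless the row is deleted.
conn.execute("UPDATE pages SET html = ? WHERE url = ?",
             ("<h1>Hand-edited</h1>", "/faq.html"))
conn.commit()

row = conn.execute(
    "SELECT html FROM pages WHERE url = '/faq.html'").fetchone()
print(row[0])  # prints "<h1>Hand-edited</h1>"
```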
Production Suitability
Is Rabbithole suitable for production use?
It depends heavily on what "production" means for your use case. Rabbithole is an experimental tool. It is not a mature, battle-tested web framework. Some honest considerations:
- Cost unpredictability: If a bot or crawler hits uncached URLs, costs can spike. You should rate-limit aggressively and/or set spending caps on your Anthropic account.
- Content reliability: The LLM may generate incorrect information, broken HTML, or simply refuse to produce the requested content. There is no validation layer.
- Latency: First page load can take several seconds while the LLM generates. This is inherent to the architecture.
- Uptime dependency: If the Anthropic API is down or rate-limiting you, uncached pages will fail to load.
- Good fits: Demos, experiments, generative art projects, personal sites, sites where "good enough" content is acceptable, internal tools.
- Bad fits: High-traffic public sites, anything requiring factual accuracy, anything where downtime or incorrect content has real consequences.
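One concrete mitigation for the cost-spike risk above is per-IP request limiting at a reverse proxy in front of Rabbithole. A minimal nginx sketch; the zone size, rate, burst, and upstream address are placeholder assumptions to adapt to your deployment:

```nginx
# Allow each client IP roughly 10 requests per minute with a small burst.
limit_req_zone $binary_remote_addr zone=rabbithole:10m rate=10r/m;

server {
    listen 80;
    location / {
        limit_req zone=rabbithole burst=5 nodelay;
        proxy_pass http://127.0.0.1:3000;  # assumed Rabbithole address
    }
}
```

Pair this with a spending cap on your Anthropic account; the proxy limits request volume, the cap bounds the worst case.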
Why Rust?
Why is Rabbithole written in Rust?
Because the developer wanted to write Rust. There is no deeper technical justification. A Python or Go implementation would work equally well for this use case. Rust does provide good performance for the HTTP serving layer and type safety that catches certain classes of bugs at compile time, but these benefits are largely incidental. The honest answer is that it was a personal preference.
Does Rust provide meaningful performance benefits here?
Negligible. The bottleneck is the Anthropic API call, which typically takes 2–10 seconds. The Rust HTTP server overhead is microseconds. You would not notice a difference if this were rewritten in Python. The SQLite cache lookup is also fast regardless of language.
Documentation Accuracy
Why is the documentation on this site sometimes inaccurate or vague?
This documentation site is generated by Rabbithole itself. Each page is produced by Claude based on a text prompt describing what the page should contain. Claude does not have access to the Rabbithole source code, the actual configuration file schema, or any runtime data. It is writing plausible documentation based on a high-level description. As a result, specific details — exact config key names, precise API shapes, file paths, default values — may be wrong. The source of truth is always the
GitHub repository.
Is this intentional?
Yes. Running the documentation site on the tool being documented is a deliberate demonstration of the tool's capabilities and limitations. It is also somewhat recursive. The imprecision is a feature in the sense that it honestly demonstrates what Rabbithole is: a system that produces plausible, useful content, not verified factual content.
Latency
Why is the first page load so slow?
The first load triggers a synchronous call to the Anthropic API. The model generates the full HTML page before the response is sent to the browser. Depending on page complexity and model, this takes roughly 3–15 seconds. Subsequent loads of the same URL are served instantly from the SQLite cache. There is currently no streaming of partial HTML to the browser during generation.
Is there a timeout?
There is an HTTP server timeout. If the Anthropic API takes too long or errors, the server will return an error page. The specifics depend on your deployment configuration. Long-running generation requests may also hit proxy or CDN timeouts if you have those in front of Rabbithole.
Link Graphs & Infinite Generation
Can pages link to each other indefinitely, creating an infinite web?
In principle, yes. Each generated page can produce links to new URLs, and each of those will be generated on first visit. There is no built-in limit on the depth or breadth of the URL graph. This is partly the point — Rabbithole can produce an effectively unbounded explorable site. The practical limit is your API budget and patience.
What happens if two requests for the same uncached URL arrive simultaneously?
This is a potential race condition. Depending on the implementation, both requests may trigger a generation call, and whichever completes last will overwrite the cache entry. This could result in duplicate API charges for a single URL. Check the repository's issue tracker for the current status of this. Rate limiting at the reverse proxy level is a reasonable mitigation.
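If you want to avoid duplicate charges application-side, the standard pattern is request coalescing ("single-flight"): the first request for a URL performs the generation, and concurrent duplicates wait for its result instead of issuing their own API call. This is not a description of Rabbithole's current behavior (check the issue tracker), just a sketch of the technique:

```python
import threading
import time

class SingleFlight:
    """Coalesce concurrent calls for the same key into one execution."""

    def __init__(self):
        self._lock = threading.Lock()
        self._inflight = {}  # key -> (Event, result holder dict)

    def do(self, key, fn):
        with self._lock:
            entry = self._inflight.get(key)
            if entry is None:
                entry = (threading.Event(), {})
                self._inflight[key] = entry
                leader = True
            else:
                leader = False
        event, holder = entry
        if leader:
            try:
                holder["value"] = fn()  # only this caller pays for generation
            finally:
                with self._lock:
                    del self._inflight[key]
                event.set()             # wake any waiting duplicates
        else:
            event.wait()                # duplicates block until leader is done
        return holder["value"]          # error handling omitted for brevity

# Demo: four concurrent requests for the same uncached URL; only the
# first actually "generates" (a stand-in for the slow Anthropic call).
calls = []
def fake_generate():
    calls.append(1)
    time.sleep(0.1)  # simulate slow LLM generation
    return "<html>page</html>"

sf = SingleFlight()
results = []
threads = [threading.Thread(target=lambda: results.append(sf.do("/a", fake_generate)))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"generation calls: {len(calls)}")  # prints "generation calls: 1"
```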
SQLite Cache
Why SQLite for caching? Why not Redis, Postgres, or the filesystem?
SQLite requires zero infrastructure. No separate database server, no daemon, no configuration. For a single-instance deployment, it is more than adequate. The generated pages are essentially static HTML blobs; SQLite handles this trivially. If you need to scale to multiple instances, you would need to replace the cache with something network-accessible, but that is a non-trivial architectural change and is not currently supported.
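The cache pattern itself is simple enough to sketch. A minimal Python illustration of the lookup-then-generate flow (the real implementation is in Rust, and the table and column names here are assumptions):

```python
import sqlite3

def get_or_generate(conn, url, generate):
    """Serve from cache if present; otherwise generate once and store."""
    row = conn.execute(
        "SELECT html FROM pages WHERE url = ?", (url,)).fetchone()
    if row is not None:
        return row[0]      # cache hit: free and near-instant
    html = generate(url)   # cache miss: one paid, slow API call
    conn.execute("INSERT INTO pages (url, html) VALUES (?, ?)", (url, html))
    conn.commit()
    return html

# Throwaway demo database with the assumed schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (url TEXT PRIMARY KEY, html TEXT)")

calls = []
def fake_llm(url):
    calls.append(url)
    return f"<html>generated for {url}</html>"

get_or_generate(conn, "/a.html", fake_llm)
get_or_generate(conn, "/a.html", fake_llm)  # second call served from cache
print(len(calls))  # prints "1"
```

Because every page is an opaque HTML blob keyed by URL, there is nothing here that needs a networked database until you run more than one instance.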
Forced Regeneration
Can I force regeneration of all pages?
Yes. Delete the cache database entirely and restart the server. Every page will regenerate on first access. Be aware this will incur API costs for all pages as they are revisited. To selectively regenerate, delete individual rows as described in the
bad page generation section above.
Context & Prompting
How much context does each generated page receive?
Each page is generated in isolation. The page generator receives: the seed prompt for that specific page (which you wrote when creating the link to it), the system prompt defining Rabbithole's output format, and whatever global context you encoded in the page prompt. There is no shared state between pages. This means consistency across pages depends entirely on how well the per-page prompts are written. If you want a consistent visual theme and navigation, every page prompt must describe those things explicitly.
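In practice this means prepending the same global style description to every per-page seed prompt yourself. A hypothetical sketch; the prompt wording and the helper below are invented for illustration, not part of Rabbithole's actual API:

```python
# Invented example: a shared "house style" block prepended to each
# per-page seed prompt so isolated generations stay visually consistent.
GLOBAL_STYLE = (
    "Dark theme, monospace font, a top nav bar linking to /faq.html "
    "and /about.html. Footer credits Rabbithole."
)

def build_page_prompt(seed_prompt: str) -> str:
    return f"{GLOBAL_STYLE}\n\nThis specific page: {seed_prompt}"

print(build_page_prompt("An FAQ page answering common questions."))
```

Since no state is shared between generations, anything not restated in a page's prompt simply does not exist for that page.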
Can I use web search or tool use within page generation?
Yes, optionally. Rabbithole supports giving the model access to web search and fetch tools during generation. This allows pages to include real, up-to-date information rather than relying solely on the model's training data. See
Web Tools for configuration details. Enabling tools increases generation time and cost.
Contributing
How do I contribute?
Open a pull request on
github.com/ajbt200128/rabbithole. Issues and feature requests are also welcome. The most wanted contributions are: additional LLM provider support, improved caching options, streaming response support, and better error handling for failed generations.
Is there a license?
Check the repository. License details are in the LICENSE file at the root of the project.