Quick answer

An LLM Wiki is a Markdown knowledge base your AI reads directly. Most tutorials assume you start with notes. Bookmarks are a richer starting point: pre-curated, broader in scope, and already linked to source material. Capture them with a tool like ContextBolt, expose them via MCP, and you have a wiki Claude can search by meaning.

In April 2026, Andrej Karpathy posted a GitHub gist that quietly changed how a lot of developers think about AI knowledge bases. He called it the LLM Wiki. The gist hit 5,000+ stars within days.

The idea is simple. Skip vector databases. Skip RAG. Keep your knowledge as plain Markdown files in folders, and let an LLM read it directly. Claude reads the files, understands them, updates them, and cross-links them. The wiki gets smarter every time you add to it.

It works. The problem is what people are starting from.

The hole in the LLM Wiki playbook

Every tutorial I have read in the last three weeks assumes the same thing: you already have notes. A vault somewhere with structured thinking already inside it.

Most people don’t. What most people do have, in volume, is bookmarks: hundreds of posts saved across X, Reddit, and LinkedIn.

And that is the source nobody is using. Which is wild, because it is the highest-signal one available.

A note is a thought you had. A bookmark is a vote. You read something, decided it mattered, and saved it. Multiply that by hundreds of saves and you have a curated knowledge graph. You just cannot search it.

Why bookmarks beat notes as a starting point

Karpathy’s argument is that an LLM is best at reasoning over content it has read end-to-end, and Markdown is just a portable format for that. The principle works on any structured source. The Wiki is the file format, not the content.

Bookmarks have three properties that make them a stronger starting point than notes for most people.

1. They already passed your filter. You did not bookmark every tweet. You bookmarked the ones you thought you might want again. That is the most expensive curation step, and you have already done it for free.

2. They span more ground than your notes. Your notes cover whatever you sat down to document. Your bookmarks cover whatever stopped you mid-scroll. That tends to be a much wider topic graph than any deliberate note-taking practice produces.

3. The original sources have context your notes don’t. A bookmarked thread links to the author, the original post, the replies, the replies of replies. An LLM reading that has more to work with than a 200-word note you wrote at 11pm.

This is not a universal claim. If you keep a serious Obsidian vault, do not abandon it. For the 95% of people who do not, bookmarks are where the actual knowledge already lives.

What a bookmark-powered LLM Wiki looks like

The shape is the same as Karpathy’s pattern. Just a different input.

A traditional LLM Wiki has three layers, as VentureBeat described it: a raw/ directory of immutable source documents, a wiki/ directory of LLM-generated and LLM-maintained pages, and a CLAUDE.md file that defines the schema and rules for both.

Translate that to bookmarks: the raw layer is every saved post with its full content, captured locally; the wiki layer is the AI-generated tags and topic clusters built on top of it; and the schema layer is the MCP endpoint that defines how Claude queries the whole thing.

The output is the same as a Markdown wiki. The LLM reads it directly, finds what is relevant, and answers grounded in your own saves. The mechanics under the hood are different. The user experience is identical.
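In the bookmark version the layers are virtual, but the traditional Markdown layout is worth seeing concretely. A minimal scaffold, with directory names taken from the description above; the root folder name is my own placeholder:

```python
from pathlib import Path

# Scaffold the three layers of a Karpathy-style LLM Wiki.
# "llm-wiki" is an arbitrary root; raw/, wiki/, and CLAUDE.md
# follow the layout described in the article.
root = Path("llm-wiki")
(root / "raw").mkdir(parents=True, exist_ok=True)   # immutable source documents
(root / "wiki").mkdir(exist_ok=True)                # LLM-generated, LLM-maintained pages
(root / "CLAUDE.md").touch()                        # schema and rules for both layers
```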

How to build one in five steps

This is the practical bit. Five steps, around 30 minutes if you are starting cold.

Step 1: Pick your sources

For most knowledge workers, X is the highest-signal bookmark source. Reddit and LinkedIn add depth in specific domains. If you also have read-it-later content (Pocket exports, Instapaper, Matter), that goes into the same wiki later. Start with the platforms where you already save reflexively.

Step 2: Capture the raw layer

Every saved post, with full content, captured locally. ContextBolt does this automatically across X, Reddit, and LinkedIn the moment you install the Chrome extension. If you prefer to roll your own: X only offers manual scrolling of the bookmarks page, Reddit exposes a saved-posts JSON feed, and LinkedIn requires the official Settings > Data Privacy export.
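If you do roll your own, turning an export into the raw layer is mechanical. A sketch, assuming a hypothetical JSON export whose records carry "id", "platform", "url", "author", and "text" fields; adapt the keys to whatever your export actually contains:

```python
import json
from pathlib import Path

def export_raw_layer(bookmarks_json: str, out_dir: str = "raw") -> int:
    """Write one Markdown file per saved post into the raw/ layer.

    The input schema here is hypothetical (id/platform/url/author/text);
    real platform exports will differ. Returns the number of files written.
    """
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    records = json.loads(Path(bookmarks_json).read_text(encoding="utf-8"))
    for rec in records:
        body = (
            f"# Saved from {rec['platform']}\n\n"
            f"- Author: {rec['author']}\n"
            f"- Source: {rec['url']}\n\n"
            f"{rec['text']}\n"
        )
        (out / f"{rec['platform']}-{rec['id']}.md").write_text(body, encoding="utf-8")
    return len(records)
```

One file per save keeps the raw layer immutable and diff-friendly, which is what lets the LLM treat it as source material rather than something to edit.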

Step 3: Tag and cluster

An LLM Wiki only works if the structure is good enough to navigate. For Markdown notes, you write headings and tags by hand. For bookmarks, you want this automated. ContextBolt assigns a topic and 2 to 4 tags to every bookmark using Claude Haiku, then groups them into clusters. The result is a topic graph you did not write but can search instantly.
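The grouping step itself is simple once tags and topics exist. A sketch of the clustering stage only; it assumes each bookmark already carries an LLM-assigned "topic" and "tags" (in ContextBolt's case from a Claude Haiku pass, but any tagger works), and it says nothing about how ContextBolt implements this internally:

```python
from collections import defaultdict

def cluster_by_topic(bookmarks: list[dict]) -> dict[str, list[dict]]:
    """Group tagged bookmarks into topic clusters.

    Assumes a prior tagging pass has stamped each bookmark with a
    "topic" string. The rule here is the simplest possible one:
    one cluster per topic.
    """
    clusters: dict[str, list[dict]] = defaultdict(list)
    for bm in bookmarks:
        clusters[bm["topic"]].append(bm)
    return dict(clusters)
```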

Step 4: Expose it to Claude via MCP

This is the step that turns a folder of saves into a wiki Claude can actually read. The Model Context Protocol lets Claude pull from external sources mid-conversation. Add the endpoint to your Claude Desktop or Claude Code config and your bookmarks become a tool in every conversation.
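The config entry follows the standard MCP `mcpServers` shape. Everything below, the server name and command included, is a placeholder rather than ContextBolt's actual values; check your tool's docs for the real ones:

```json
{
  "mcpServers": {
    "bookmarks": {
      "command": "npx",
      "args": ["-y", "your-bookmark-mcp-server"]
    }
  }
}
```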

Step 5: Query it like a wiki

Ask Claude what you have saved on a topic. Ask it to summarise everything you bookmarked from a specific account. Ask it to find the thread you remember reading three months ago about agent loops. The answer comes from your own curated content, not the public web, with sources cited.

If you want the full Markdown-based version using local files instead of bookmarks, the existing tutorials cover that ground well. Anthropic’s Claude Code docs walk through the file-reading mechanics, and the Karpathy gist itself is around 200 lines of CLAUDE.md plus rules. The point of this post is the source layer most people have already built but never use.

Notes-first vs bookmarks-first wikis

Both approaches are valid. They suit different people. This is the honest comparison.

| Dimension | Notes-first wiki | Bookmarks-first wiki |
| --- | --- | --- |
| Starting effort | High (you write the notes) | Low (saves are already there) |
| Volume | Low to medium | Medium to high |
| Topic breadth | Narrow (what you sit down to write) | Wide (what you scroll past) |
| Source attribution | Often missing | Always present (the original URL) |
| Maintenance | Active (you keep writing) | Passive (you keep saving) |
| Best for | Researchers, long-form thinkers | Anyone who saves content reflexively |

The realistic answer for most people is to run both. Notes for original thinking. Bookmarks for content you have absorbed but not authored. The LLM Wiki pattern stitches them together if you point Claude at both sources.

When this approach falls down

Two cases where the bookmark-first wiki is the wrong tool.

You bookmark indiscriminately. If you save everything you scroll past without reading it, your wiki turns into noise. The bookmark-first approach assumes some baseline filtering. If you have 12,000 saves and only read 200 of them properly, the AI is going to surface a lot of half-relevant content. The fix is to be more deliberate about saving, not to scrap the system.

Your work depends on long-form synthesis you wrote yourself. If you are an academic writing a thesis or a novelist working from a personal canon, your own writing is the wiki. Bookmarks are a side input. The Karpathy approach with Markdown notes is the right starting point for you.

For everyone in between, which is most people working in tech, marketing, research, or knowledge work, bookmarks are the place to start.

The compounding effect

Karpathy’s strongest claim about the LLM Wiki is that it compounds. Add a new source and the LLM does not just file it away. It reads the wiki first, recognises the new content fits an existing page, and updates that page rather than creating a duplicate.

The bookmark-first version compounds in the same shape, just with different mechanics. Save a new tweet about agent memory. Topic clustering already knows you have an “AI agent memory” cluster. The new save lands inside it automatically. The semantic index updates. Next time you ask Claude what you have read about agent memory, the new save is already part of the answer.
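The routing step that keeps clusters from duplicating can be sketched in a few lines. Tag overlap stands in here for whatever similarity measure the real tooling uses, and the threshold is arbitrary:

```python
def file_new_save(clusters: dict[str, list[dict]], bookmark: dict,
                  min_overlap: int = 2) -> str:
    """Route a new save into the existing cluster whose tags overlap most.

    Falls back to starting a new cluster, named after the bookmark's
    LLM-assigned topic, when no cluster shares at least `min_overlap`
    tags. Illustrative only: real tooling may use embeddings or an LLM
    call instead of raw tag overlap.
    """
    tags = set(bookmark["tags"])
    best, best_score = None, 0
    for name, members in clusters.items():
        cluster_tags = {t for m in members for t in m["tags"]}
        score = len(tags & cluster_tags)
        if score > best_score:
            best, best_score = name, score
    if best is None or best_score < min_overlap:
        best = bookmark["topic"]          # no good match: open a new cluster
        clusters.setdefault(best, [])
    clusters[best].append(bookmark)
    return best
```

Run against the example in the text: a new tweet about agent memory shares tags with the existing "AI agent memory" cluster, so it lands there instead of spawning a duplicate.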

You are not maintaining a wiki. You are saving content the way you already do. The wiki is what falls out of the process.

This is the part most knowledge management advice misses. The work is not in writing notes or curating folders. The work is in being able to find what you already kept. Semantic search on top of durable local storage is what closes the loop.

What this actually changes

If the LLM Wiki idea sticks, and the GitHub momentum suggests it will, the implication is bigger than personal productivity. It changes what an AI assistant is supposed to know.

The first generation of AI tools knew the public web. The second generation knew your conversations. The third generation knows your knowledge: your notes, your saves, your decisions, your archive.

Bookmarks are the easiest entry point because almost everyone already has them. They just have not been usable. With the right tool stack, that changes. The library you have been building for years finally becomes a thing your AI can read.

The Karpathy gist shows the pattern. Your bookmarks are the data set most ready to run on it.

Frequently asked questions

What is an LLM Wiki?
An LLM Wiki is a personal knowledge base built as plain Markdown files that an LLM reads directly. Andrej Karpathy popularised the pattern in April 2026. The LLM acts as both reader and editor: it answers questions from your wiki, updates pages, and cross-links related ideas as you add new content.
Can bookmarks really work as a knowledge base?
Yes. Bookmarks are a curated list of content you found important enough to save. With AI tagging and topic clustering, they become a structured knowledge graph. The catch is search. Most platforms make their saves write-only, which is what tools like ContextBolt fix by giving Claude an MCP endpoint to query them.
How is the LLM Wiki different from RAG?
RAG retrieves chunks from a vector database based on a query. The LLM Wiki skips retrieval. The LLM reads the wiki directly, the same way a person reads a book. For personal knowledge bases under a few thousand pages, this often works better than RAG and removes the embedding pipeline entirely.
Do I need to use Markdown to make this work?
Markdown is the standard because it is text, structured, and tool-agnostic. An LLM can read most plain text formats. Bookmarks captured by ContextBolt are stored as structured records with content, tags, topic, and platform metadata. They are not Markdown, but they work the same way for an LLM via MCP.
Do I need ContextBolt to build a bookmark wiki?
No. You can manually export bookmarks from each platform, convert them to Markdown, and point Claude Code at the folder. ContextBolt automates the capture, AI tagging, topic clustering, and MCP endpoint so you can query the result from any Claude conversation. It is a shortcut, not a requirement.