Claude.ai’s built-in memory (free since March 2026) is the easiest starting point. For deeper persistence, Claude Projects, CLAUDE.md files, and MCP servers each solve a different problem. Built-in memory stores preferences. MCP connects Claude to your actual knowledge. You need both.
Claude finally has memory. On 2 March 2026, Anthropic activated memory for all Claude users, including the free tier. Claude now synthesises facts from your conversations and carries them into future sessions automatically.
It is genuinely useful. Claude stops asking what industry you work in. It remembers you prefer British English. It knows you’re building a side project in the evenings.
But here is the uncomfortable truth: built-in memory is a preferences notepad. It captures what you’ve said to Claude. It doesn’t know what you’ve read, saved, or researched. The threads you bookmarked last month. The documents you’ve built up over a year. The domain knowledge you’ve accumulated across thousands of saved posts.
That’s a separate problem. And it needs a separate solution.
This guide covers all 9 methods. Use the comparison table to find what fits your setup, then dive into whichever methods are new to you.
## The 9 methods at a glance
| Method | Works in | Setup effort | Type of memory | Cost |
|---|---|---|---|---|
| 1. Claude.ai built-in memory | Claude.ai | Zero (automatic) | Preferences + habits | Free |
| 2. Claude Projects | Claude.ai | Low | Scoped instructions + docs | Free |
| 3. CLAUDE.md files | Claude Code | Low | Session-level context | Free |
| 4. Auto Memory | Claude Code v2.1.59+ | Zero (automatic) | Session learnings | Free |
| 5. MCP memory servers | Claude Code / Desktop | Medium | Semantic knowledge store | Free/paid |
| 6. Filesystem MCP | Claude Code / Desktop | Low | Local files as live context | Free |
| 7. ContextBolt MCP | Claude Code / Desktop | Low | Curated bookmark knowledge | £4/mo |
| 8. Memory API Tool | API (developers) | High | App-level persistence | API costs |
| 9. Context document | Anywhere | Low | Manual reference | Free |
## 1. Claude.ai built-in memory
Best for: Anyone who uses Claude.ai regularly and wants zero-setup persistence.
This is the one everyone has been waiting for. Anthropic rolled out memory to Team and Enterprise plans in September 2025, reached Pro and Max users in October 2025, and activated it for the free tier on 2 March 2026.
Here is how it works. Claude scans your conversation history roughly every 24 hours and distils the facts worth keeping into a synthesised summary: your profession, the tools you use, recurring topics, language preferences. When you start a new conversation, that summary loads automatically. Claude already knows the context before you type a single word.
You can see exactly what Claude remembers. Go to your Claude.ai account settings, find the Memory section, and read every item stored. You can edit any entry, delete what’s stale, or add facts manually without waiting for a conversation to trigger it.
The limitation is scope. Built-in memory captures who you are and how you like to work. It doesn’t capture what you know. It will remember that you work in AI research. It won’t remember the 200 papers you’ve read about transformer architectures.
Pros:
- Zero setup, works automatically from day one
- Available on all plans including free
- Full transparency: see, edit, and delete every memory
- Carries across all claude.ai conversations

Cons:
- Only works inside claude.ai, not Claude Code or other tools
- Captures preferences, not external knowledge
- Memory lives on Anthropic’s servers, not locally
Verdict: The best starting point for most Claude users. Not the whole answer.
## 2. Claude Projects with custom instructions
Best for: Anyone doing recurring work inside a defined domain.
Claude Projects give you a persistent workspace. You write a set of instructions once, upload supporting documents, and every conversation inside that Project starts with those loaded automatically.
The instructions stack. Your profile-level memory loads first. Project instructions add a layer on top. That means you write universal preferences once (in memory), then project-specific rules in each Project. You don’t repeat yourself.
Think of Projects as scoped long-term memory. A writing project can hold your style guide, tone of voice doc, and audience brief. A coding project can hold your architecture decisions, naming conventions, and a description of the codebase. A research project can hold the source documents you reference repeatedly.
The practical power is in the knowledge base. Upload PDFs, Markdown files, or text documents. Claude cites them directly. You don’t have to paste context every session. It’s already there.
Pros:
- Free on all Claude.ai plans
- Upload documents as persistent reference material
- Instructions persist automatically across all Project conversations
- Stacks with built-in memory for layered context

Cons:
- Limited to claude.ai, not transferable to other tools
- Requires manual curation and upkeep
- Knowledge base has file size limits per Project
Verdict: Underused by most people. If you do the same type of work repeatedly, set up a Project for it.
## 3. CLAUDE.md files
Best for: Claude Code users who want session-level persistence without any plugins.
CLAUDE.md is a plain Markdown file that Claude Code reads automatically at the start of every session. It functions as a persistent system prompt. Whatever you write there is injected into every Claude Code session in that directory.
Put it at the root of your project and Claude knows the architecture, tech choices, naming conventions, and any context that would otherwise require 10 minutes of explanation at the start of every session. Place CLAUDE.md files in subdirectories for more granular, scoped instructions.
Critically, CLAUDE.md survives context compaction. When you run /compact, Claude re-reads the file from disk and re-injects it. The session context gets compressed, but the foundational instructions stay intact.
This is the simplest and most reliable form of persistence in Claude Code. No plugins. No MCP servers. Just a text file.
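To make this concrete, here is an illustrative CLAUDE.md skeleton. The project details are invented; the structure is the point:

```markdown
# Project: invoice-api

## Stack
- Node 20, TypeScript, Fastify, PostgreSQL via Prisma

## Conventions
- British English in comments and docs
- Tests live next to source files as *.test.ts
- Run `npm run check` (lint + typecheck) before committing

## Architecture notes
- All money values are integer pence, never floats
- Auth lives in src/plugins/auth.ts; do not add auth logic elsewhere
```

Keep it short and declarative: every line costs context-window space in every session, so it should earn its place.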
Pros:
- No setup beyond creating the file
- Survives context compaction
- Version-controllable alongside your code
- Works with any version of Claude Code

Cons:
- You have to write and maintain it manually
- Claude Code only, not claude.ai
- Nested CLAUDE.md files in subdirectories are not re-injected after compaction
Verdict: If you use Claude Code and don’t have a CLAUDE.md, create one today. It’s the most impactful five minutes you’ll spend on this list.
## 4. Claude Code Auto Memory
Best for: Claude Code users on v2.1.59 or later who want persistence without writing anything themselves.
Auto Memory is built into Claude Code. When active, Claude writes notes to memory files during sessions: build commands it figures out, architecture patterns it discovers, debugging insights, your preferences about code style. These notes persist between sessions so Claude isn’t starting from zero each time.
The related feature is Auto Dream. Periodically, Claude reviews every memory file, prunes what’s stale, resolves contradictions, and reorganises the rest. It converts relative dates to absolute timestamps, merges overlapping entries, and discards anything no longer relevant. That’s automated memory hygiene, handled for you.
Both features require Claude Code v2.1.59 or later. Check your version and update if needed.
Pros:
- No manual effort once set up
- Captures project-specific learnings automatically
- Auto Dream keeps memory accurate over time

Cons:
- Requires Claude Code v2.1.59 or later
- Less predictable than manually written CLAUDE.md content
- Claude Code only
Verdict: A good complement to CLAUDE.md rather than a replacement. Use both.
## 5. MCP memory servers
Best for: Power users who want a persistent semantic knowledge store across all their AI tools.
MCP memory servers are third-party tools that give Claude a proper external memory layer. They store facts, notes, and context in a local database (usually SQLite or a vector store), expose them via an MCP endpoint, and inject relevant context into conversations automatically.
The most widely used options in 2026 are:
Mem0 / OpenMemory MCP. Mem0’s OpenMemory MCP stores memories in a local database with semantic search. You can tell Claude “remember that we use PostgreSQL on this project” and it’s retrievable in any future session. Agents using Mem0 reportedly show 90% lower token usage compared to loading full context each time.
mcp-memory-service. An open-source option with a knowledge graph backend. Supports REST API access and autonomous memory consolidation. Useful for teams running their own infrastructure.
Both options require some initial setup: installing the server, adding it to your Claude config, and optionally configuring what triggers a memory save. Once running, they’re mostly hands-off.
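Registering a server is typically a short entry in your Claude Desktop config (claude_desktop_config.json). As a sketch, here is how Anthropic’s reference knowledge-graph memory server is added; Mem0’s OpenMemory and mcp-memory-service document their own commands, so treat the `args` as illustrative:

```json
{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-memory"]
    }
  }
}
```

Restart Claude Desktop after editing the file, and the server’s tools appear in the conversation automatically.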
Pros:
- Semantic retrieval: finds relevant memories by meaning, not keyword
- Works across Claude Code, Claude Desktop, and any MCP-compatible tool
- Local-first options available (your data stays on your machine)

Cons:
- Medium setup effort compared to built-in options
- Third-party tools with their own update and maintenance cycles
- Some options have recurring costs
Verdict: The right choice if you want persistent memory that travels with you across every AI tool, not just Claude.ai.
## 6. Filesystem MCP
Best for: Anyone with an existing library of notes, research, or documents they want Claude to access live.
The Filesystem MCP server is simple: point it at a directory on your machine and Claude can read, search, and reference any file inside it during a conversation.
The use case is broader than it sounds. If you keep a folder of research notes in Markdown, Claude can query them. If you have a running log of project decisions, Claude can reference it. If you maintain a personal knowledge base as local files, the Filesystem MCP turns it into live AI context without any data leaving your machine.
The limitation is that Claude can read files but it doesn’t understand their full content until you ask. It’s less “Claude knows everything in that folder” and more “Claude can look things up in that folder when relevant.”
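Setup follows the same config pattern as other MCP servers: one entry pointing the official filesystem server at your directory. The path below is a placeholder; substitute your own notes folder:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/you/notes"
      ]
    }
  }
}
```

Claude can only reach directories you list here, which doubles as a useful safety boundary.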
Pros:
- Free and local-first
- Works with any file format Claude can read
- Zero data leaves your machine
- Works with Claude Code and Claude Desktop

Cons:
- Reactive rather than proactive: Claude looks up files when prompted
- No semantic search within files unless combined with a memory server
- You have to maintain the files yourself
Verdict: Excellent if you already have a local knowledge base. Less useful if you don’t maintain one.
## 7. ContextBolt MCP (bookmark knowledge)
Best for: Power users whose curated reading from X, Reddit, and LinkedIn is a genuine knowledge asset.
This is the method nobody else is writing about yet.
You’ve bookmarked hundreds of posts. Threads about AI techniques, Reddit discussions on tools, LinkedIn insights from people you follow. That collection represents years of curated expertise. Most tools can’t touch it.
ContextBolt captures your bookmarks from X, Reddit, and LinkedIn, AI-tags each one with a topic, and gives Pro users an MCP endpoint. Add it to your Claude config and suddenly your entire bookmark library is searchable inside any Claude conversation.
Ask Claude “what have I saved about context engineering?” and it searches your bookmarks by meaning, not keywords. Ask it to list everything you’ve bookmarked on a topic and it queries your actual saved content, not the public web.
The four MCP tools available are: search_bookmarks (semantic search across your full library), list_clusters (all your auto-generated topic clusters), get_cluster_bookmarks (bookmarks by topic), and get_recent_bookmarks (latest saves by platform).
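Wiring a hosted endpoint like this into Claude Code is usually a one-line config entry in a project’s .mcp.json. The URL below is a placeholder, not ContextBolt’s real endpoint; use the one shown in your ContextBolt account:

```json
{
  "mcpServers": {
    "contextbolt": {
      "type": "http",
      "url": "https://example.com/mcp"
    }
  }
}
```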
This is a qualitatively different type of memory from anything else on this list. It’s not what you’ve told Claude. It’s what you’ve found important enough to save. That’s a stronger signal than conversation history.
Pros:
- Turns years of curated bookmarks into searchable AI context
- Semantic search finds relevant saves even without exact keywords
- Covers X, Reddit, and LinkedIn in one endpoint
- Local-first storage: content preserved even if original post is deleted

Cons:
- Pro plan required (£4/month)
- Chrome extension needed for capture
- Only as good as your bookmark curation habits
Verdict: The highest-signal type of external memory on this list. Your bookmarks are a curated knowledge graph you’ve been building for years. This is the search layer for it.
## 8. Anthropic Memory API Tool
Best for: Developers building Claude-powered apps that need persistent memory across user sessions.
The Anthropic Memory Tool (type: memory_20250818) is an API-level feature for developers, not an end-user setting. It gives Claude the ability to create, read, update, and delete memory files during a conversation, with those files persisting between API calls.
If you’re building a Claude-powered product and want users to have memory that survives across sessions, this is how you wire it up. The tool supports six operations: view, create, str_replace, insert, delete, and rename. Memory lives in a file directory you define and manage.
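The memory operations run client-side: Claude emits a tool call, and your app executes it against a directory you control. As a minimal sketch, here is a hypothetical handler for two of the six operations; the field names follow the schema described above, but verify them against Anthropic’s current API reference before building on this:

```python
from pathlib import Path

# Directory where this app chooses to persist Claude's memory files
# (illustrative location, not mandated by the API).
MEMORY_ROOT = Path("./memory")

def handle_memory_command(cmd: dict) -> str:
    """Execute one memory tool call and return the result string."""
    path = (MEMORY_ROOT / cmd["path"].lstrip("/")).resolve()
    root = MEMORY_ROOT.resolve()
    # Basic sandboxing: refuse paths that escape the memory directory.
    if root not in path.parents and path != root:
        return "error: path escapes memory directory"
    if cmd["command"] == "create":
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(cmd["file_text"])
        return f"created {cmd['path']}"
    if cmd["command"] == "view":
        # Files return their contents; directories return a listing.
        if path.is_file():
            return path.read_text()
        return "\n".join(p.name for p in path.iterdir())
    return f"unsupported command: {cmd['command']}"
```

In a real integration you would call this for each memory tool_use block in Claude’s response and feed the string back as the tool result.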
For personal Claude users, this isn’t directly accessible. It’s relevant if you’re building something on top of the API.
Pros:
- Official Anthropic solution, not a third-party workaround
- Flexible: full read/write/delete control over memory files
- Designed for production use at scale

Cons:
- Developer-facing only, requires API access and custom integration
- Not accessible to end users directly
- Incurs API costs
Verdict: Relevant if you’re building on the Anthropic API. Skip it if you’re using Claude as a personal tool.
## 9. Personal context document
Best for: Anyone who wants reliable persistence without installing anything.
The oldest trick in the book and still underrated. Write a Markdown document about yourself. Your role, goals, current projects, important decisions, communication preferences, domain knowledge. Paste it or attach it at the start of conversations where context matters.
It sounds manual because it is. But it is also completely reliable. No plugins, no servers, no sync issues. Just a document that you keep updated and paste when needed.
Some people maintain this as a “personal README” and update it every week or two. Others paste it at the start of every Claude session and delete it once context is established. Either approach works.
The smarter version: keep the document in a folder that’s also connected via Filesystem MCP. Then you have both the manual option (paste it yourself) and the automated option (Claude looks it up when prompted).
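An illustrative skeleton, to show the shape rather than prescribe the content:

```markdown
# Personal context (updated fortnightly)

## Who I am
Product engineer at a fintech startup. British English, please.

## Current projects
- Migrating our billing service to a new payments provider
- Evenings: building a recipe-sharing side project

## How I like answers
- Direct, no preamble; code examples over long prose
- Flag trade-offs explicitly rather than picking for me
```

A page or less is the sweet spot: long enough to be useful, short enough that you actually keep it current.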
Pros:
- Zero technical setup, works everywhere
- Completely under your control
- Portable across Claude, ChatGPT, Gemini, and any other tool

Cons:
- Manual process: you have to remember to paste it
- Gets stale if you don’t maintain it
- Adds to the context window rather than retrieving on demand
Verdict: An underrated fallback that most people skip. Worth maintaining even if you use the technical methods above.
## How to choose the right method
Pick the lines that sound most like you.
If you use Claude.ai for general work: Turn on built-in memory (it’s on by default since March 2026) and set up a Project for any recurring work area. That covers 80% of everyday use.
If you use Claude Code daily: Write a CLAUDE.md file for every project you work in. Update it when you learn something important. Enable Auto Memory if you’re on v2.1.59 or later. That’s your baseline.
If you want Claude to know your research and reading history: Use Filesystem MCP for local notes, ContextBolt MCP for your social bookmarks. These are the two highest-signal external memory sources for most knowledge workers.
If you want full semantic memory that travels across all AI tools: Add Mem0 or mcp-memory-service. More setup effort, but the most flexible option long-term.
If you’re building a product on the API: Look at the Anthropic Memory Tool. Everything else on this list is for personal use.
If you want something that works right now without installing anything: Write a personal context document and paste it at the start of sessions. It is the lowest-tech option and it works.
## The real gap in Claude’s memory
Here is what most guides miss.
Claude’s built-in memory captures your habits. Projects capture your instructions. CLAUDE.md captures your working context. All of those are about how you work.
None of them capture what you know.
The knowledge you’ve assembled over years of reading, bookmarking, annotating, and researching is not accessible to Claude through any of these methods. It lives in your saved posts, your note folders, your bookmarked threads. That’s the gap.
MCP is how you close it. The Filesystem server connects your notes. ContextBolt connects your curated social saves. Together, they give Claude access to the knowledge base you’ve actually built, not just the preferences you’ve expressed in chat.
Memory without knowledge context is useful but incomplete. The full picture requires both.