mem0 is an open-source memory layer for AI applications, exposed as an MCP server. It uses vector embeddings to provide semantic recall across past conversations: ask “what did we discuss about pricing?” and get answers grounded in earlier sessions, even if you didn’t use the exact word “pricing”.
Why use it
Vector memory feels different from explicit memory. With Anthropic’s Memory server, you tell Claude “remember X” and it stores X verbatim. With mem0, the server watches the conversation, summarizes it, embeds the summaries, and later retrieves them by semantic similarity. Less precise, but it covers more ground.
For long-running projects where you don’t want to remember to “tell Claude” every key fact, mem0 is the lower-friction option.
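A minimal sketch of that flow using the open-source `mem0ai` Python SDK (method signatures and return shapes vary by release, so treat this as illustrative rather than exact): hand mem0 the conversation turns, then query later by meaning rather than keyword.

```python
# pip install mem0ai -- assumes OPENAI_API_KEY is set, since the default
# config uses OpenAI for both the fact-extracting LLM and the embedder.
from mem0 import Memory

m = Memory()

# Let mem0 watch a conversation: it extracts and embeds the salient facts
# itself, rather than storing the transcript verbatim.
conversation = [
    {"role": "user", "content": "Let's go with usage-based pricing, $0.02 per call."},
    {"role": "assistant", "content": "Noted: usage-based pricing at $0.02 per call."},
]
m.add(conversation, user_id="me")

# Days later, in a different session: semantic search, no exact keyword needed.
hits = m.search("what did we decide about how to charge customers?", user_id="me")
print(hits)  # matched memories with similarity scores
```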
What it actually does
The tools cover adding a memory (or letting mem0 auto-extract one from the conversation), searching by semantic query, listing memories, deleting a memory, and managing memory namespaces. The hosted version adds team-sharing primitives.
Practical patterns (sketched in code after the list):
- “What do I usually prefer for state management in React projects?”
- “What was the verdict on whether to roll our own auth or use Better Auth?”
- “Recall any decisions I’ve made about pricing strategy.”
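Those queries map directly onto the search call; a sketch against the same SDK, where `user_id` and `agent_id` are mem0’s namespace handles and “me” / “project-x” are made-up labels:

```python
from mem0 import Memory

m = Memory()  # same default setup as the earlier sketch

# The recall patterns above, issued by meaning rather than keyword.
m.search("preferred state management for React projects", user_id="me")
m.search("verdict on rolling our own auth vs Better Auth", user_id="me")
m.search("decisions about pricing strategy", user_id="me", agent_id="project-x")
```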
Gotchas
Embedding quality matters. Self-hosted with a small embedding model gives blurrier recall than hosted with a top-tier embedder. Test before you commit.
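If you self-host, the embedder is configurable, so you can run the same recall test against a small local model and a stronger one before deciding. A hedged sketch; the provider names and config keys follow mem0’s provider/config layout, but verify against the version you run:

```python
from mem0 import Memory

# Self-hosted mem0 with an explicit embedder and vector store choice.
config = {
    "embedder": {
        "provider": "openai",
        "config": {"model": "text-embedding-3-large"},  # vs. a small local model
    },
    "vector_store": {
        "provider": "qdrant",
        "config": {"host": "localhost", "port": 6333},
    },
}
m = Memory.from_config(config)

# Same recall test against both configs shows how much embedding quality buys you.
m.add("We rejected freemium; trials convert better for us.", user_id="me")
print(m.search("what did we discuss about pricing?", user_id="me"))
```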
Vector databases can grow unbounded. mem0 has retention policies; use them. Don’t rely on mem0 to clean itself; review periodically.
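The open-source core won’t prune on its own, so the periodic review is worth scripting. A rough sketch, with defensive unwrapping because the return shape differs across releases; the keyword check is just a placeholder for whatever review rule you actually apply:

```python
from mem0 import Memory

m = Memory()

# List everything in a namespace, delete what no longer applies.
resp = m.get_all(user_id="me")
memories = resp["results"] if isinstance(resp, dict) else resp  # shape varies by version

for mem in memories:
    print(mem["id"], mem["memory"])
    if "deprecated" in mem["memory"].lower():  # placeholder review rule
        m.delete(memory_id=mem["id"])
```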
Pair with ContextBolt for two layers of memory: one for what Claude learns from you (mem0), one for what you’ve captured externally (ContextBolt bookmarks). Claude reasons across both naturally.