Memory

Anthropic's official knowledge-graph memory server with persistent entity storage.

Works with: Claude Desktop, Claude Code, Cursor, Windsurf, Cline, VS Code (Continue), Zed
Quick install
npx -y @modelcontextprotocol/server-memory

How to install the Memory MCP server

Add this to your Claude Desktop MCP configuration:

{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-memory"
      ]
    }
  }
}

Add it to Claude Code with the `claude mcp add` command:

claude mcp add memory -- npx -y @modelcontextprotocol/server-memory

Add the same block to your Cursor, Windsurf, Cline, VS Code (Continue), or Zed MCP configuration (the JSON is identical for each client, but check your client's docs for where the config file lives and whether it wraps servers under a different key):

{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-memory"
      ]
    }
  }
}

The Memory MCP server is Anthropic’s reference implementation of persistent AI memory. It exposes a simple knowledge graph where Claude stores entities (people, projects, tools), relations between them, and observations about each one. The data persists in a local JSON file across conversations, so anything Claude remembers survives every restart.

This is the cleanest way to give a model a sense of continuity. By default Claude has no memory of past chats. Adding the Memory server changes that without requiring a full RAG pipeline or vector database.

Why use it

The friction that makes AI clients feel disposable is the lack of recall. Every new conversation starts at zero. You retype your context, your stack, your preferences. The Memory server lets Claude remember “this user prefers Postgres over MongoDB” or “my team uses Linear, not Jira” once and then act on it forever.

Pair it with the GitHub MCP server for a dev workflow where Claude remembers your repo conventions. Pair it with ContextBolt and you get two layers: explicit facts you’ve told Claude (Memory), plus everything you’ve ever bookmarked (ContextBolt).

What it actually does

Three primitives: create entities, add relations between them, and attach observations to them. Claude calls these tools naturally during a conversation. You don’t have to teach it the API.

Practical patterns:

  • “Remember that I’m building a Chrome extension called ContextBolt with a $6/month Pro tier.” Claude creates an entity for the project and observations for the pricing.
  • “What do you remember about my work?” Claude queries the graph and surfaces relevant entities.
  • “Forget everything I told you about Project X.” Claude removes the entity and its relations.
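Behind the scenes, a "remember" request becomes a create_entities call (and a create_relations call where a connection is involved). A sketch of the payload shape, based on the reference server's tool schema — the example values are illustrative, not part of the API:

```json
{
  "entities": [
    {
      "name": "ContextBolt",
      "entityType": "project",
      "observations": [
        "Chrome extension",
        "$6/month Pro tier"
      ]
    }
  ]
}
```

Relations follow the same pattern: a list of objects with from, to, and relationType fields. You never construct these payloads yourself; Claude generates them from plain-language requests.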

Gotchas

The graph is local-first by design. If you want it shared across machines or accessible to Claude Code on a server, you’ll need to sync the JSON file yourself (Dropbox, Syncthing, a Git repo). There’s no built-in sync.
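If you do sync the file, point the server at the synced location with the MEMORY_FILE_PATH environment variable. The variable itself is documented by the server; the Dropbox path below is just an example:

```json
{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-memory"],
      "env": {
        "MEMORY_FILE_PATH": "/Users/you/Dropbox/mcp/memory.json"
      }
    }
  }
}
```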

Memory is plain text. Don’t store secrets in it. If you tell Claude to remember an API key, that key sits in the JSON file in cleartext. Use a password manager instead.

The schema is intentionally minimal. If you want richer features (vector search, embeddings, semantic similarity), look at mem0 or build on top of Memory’s primitives.

Example prompts for the Memory MCP server

Remember that I'm working on a SaaS for solo founders.

What do you remember about my last project?

Add to memory: my team uses Postgres for analytics.

Also in Memory & Knowledge

Combine Memory with ContextBolt

Memory gives Claude one kind of memory. ContextBolt adds another: every tweet, post, and article you save across X, Reddit, and LinkedIn becomes searchable by meaning. Run both as MCP servers and Claude can pull from both layers in one prompt.

See ContextBolt →

Memory MCP server: FAQs

Is the Memory server made by Anthropic?

Yes. It lives in the official modelcontextprotocol/servers GitHub repo and is maintained by Anthropic. The schema is intentionally simple to demonstrate persistent memory patterns; community alternatives like mem0 add features like vector search.

Where is the memory data stored?

On disk locally as a JSON file. The server communicates over stdio and makes no network requests, so your memory data never leaves your machine. The file location is configurable via the MEMORY_FILE_PATH environment variable.

How is it different from ContextBolt or mem0?

Memory stores explicit entities, relations, and observations. ContextBolt stores bookmarks with semantic search. mem0 stores conversation summaries with vector retrieval. They solve different memory problems and can run alongside each other in the same client.

Does it work in Claude Code?

Yes. Add it to your Claude Code MCP config the same way as Claude Desktop. Many developers use it to give long-running coding sessions persistent context about project conventions and decisions.

Can I edit the memory file directly?

Yes. The file is plain JSON. Edit it carefully, keep a backup, and restart the server to reload. For production use, treat the file like a database and back it up.
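For orientation before you edit, the reference server stores the graph as line-delimited JSON, one record per line with a type discriminator — roughly like the sketch below. Exact field names can vary by version, so check your own file rather than trusting this shape:

```json
{"type": "entity", "name": "ContextBolt", "entityType": "project", "observations": ["Chrome extension", "$6/month Pro tier"]}
{"type": "relation", "from": "user", "to": "ContextBolt", "relationType": "builds"}
```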