Pieces

Long-term memory for developers, with snippet capture and IDE integration.

Works with: Claude Desktop, Cursor, VS Code (Continue)
Quick install
npx -y @pieces-app/mcp-server

How to install the Pieces MCP server

Add this to your Claude Desktop MCP configuration:

{
  "mcpServers": {
    "pieces": {
      "command": "npx",
      "args": [
        "-y",
        "@pieces-app/mcp-server"
      ]
    }
  }
}

Add this to your Cursor MCP configuration:

{
  "mcpServers": {
    "pieces": {
      "command": "npx",
      "args": [
        "-y",
        "@pieces-app/mcp-server"
      ]
    }
  }
}

Add this to your VS Code (Continue) MCP configuration:

{
  "mcpServers": {
    "pieces": {
      "command": "npx",
      "args": [
        "-y",
        "@pieces-app/mcp-server"
      ]
    }
  }
}

The Pieces MCP server gives Claude access to your Pieces long-term memory. If you use Pieces’ developer tools (the desktop app, browser extension, or IDE plugins), Pieces has been quietly building a record of your activity: code snippets you save, articles you read, conversations you have. The MCP server lets Claude query that record.

For developers who already use Pieces, this is the missing link. The data was being captured. Now Claude can use it.

Why use it

The big idea behind Pieces is that your “memory” of how you work is mostly a function of context you’ve already seen. The article you read last Thursday, the snippet you saved from Stack Overflow, the conversation you had in a vendor’s docs chat. Pieces captures it all locally. Claude with the MCP server queries across it.

The result is a coding assistant that remembers what you’ve actually been doing, not just what’s in the current file. “Where did I see that auth pattern last week?” becomes a real question Claude can answer.

What it actually does

  • Query saved snippets by content, tag, language, or context.
  • Fetch the long-term memory feed, a chronological record of activity Pieces has captured.
  • Filter by source (browser, IDE, chat).
  • Optionally generate code suggestions grounded in your saved snippets.

Practical patterns:

  • “Find the Postgres migration snippet I saved last month.”
  • “What was that React pattern I bookmarked from the docs yesterday?”
  • “Summarize what I worked on this week using Pieces’ memory.”

Gotchas

PiecesOS has to be running for the MCP server to work. If the daemon stops, the server returns errors. Start PiecesOS at login if you want this to be reliable.
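Since the server is only a wrapper around the daemon, a quick health check before launching Claude can save a confusing debugging session. A minimal sketch, assuming PiecesOS's default local port (39300 at the time of writing) and its `/.well-known/health` endpoint; check your installation if either differs:

```shell
# Hedged sketch: probe the local PiecesOS daemon before relying on the MCP server.
# Port 39300 and the /.well-known/health path are assumptions based on the
# default PiecesOS install -- adjust if your setup uses a different port.
check_pieces() {
  if curl -sf "http://localhost:39300/.well-known/health" > /dev/null 2>&1; then
    echo "running"
  else
    echo "unreachable"
  fi
}

check_pieces
```

If this prints `unreachable`, start the Pieces desktop app (which launches PiecesOS) before retrying the MCP server.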

Cloud sync is opt-in. The local-first design is one of Pieces’ selling points, but it also means you don’t get cross-device memory unless you turn on sync. For solo developers on one machine this is fine. For teams or multi-device setups, enable sync deliberately.

For a complete memory setup, pair Pieces with ContextBolt for social bookmarks and either Memory or mem0 for explicit knowledge-graph entries. Pieces handles ambient capture, ContextBolt handles social, the knowledge-graph servers handle facts you state explicitly. Three different shapes of memory, all queryable from one prompt.
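A sketch of what that combined setup could look like in one MCP configuration. The `@modelcontextprotocol/server-memory` package is Anthropic's official Memory server; the `contextbolt-mcp` package name is a placeholder, since ContextBolt's actual install command depends on its distribution:

```json
{
  "mcpServers": {
    "pieces": {
      "command": "npx",
      "args": ["-y", "@pieces-app/mcp-server"]
    },
    "memory": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-memory"]
    },
    "contextbolt": {
      "command": "npx",
      "args": ["-y", "contextbolt-mcp"]
    }
  }
}
```

With all three registered, a single prompt can draw on ambient capture (Pieces), saved social content (ContextBolt), and explicit knowledge-graph facts (Memory) at once.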

Also in Memory & Knowledge

Combine Pieces with ContextBolt

Pieces gives Claude one kind of memory. ContextBolt adds another: every tweet, post, and article you save across X, Reddit, and LinkedIn becomes searchable by meaning. Run both as MCP servers and Claude can pull from both layers in one prompt.

See ContextBolt →

Pieces MCP server: FAQs

Is the Pieces server official?

Yes. Pieces ships an official MCP server at @pieces-app/mcp-server. It runs against your local PiecesOS installation.

Does it work without the Pieces desktop app?

No. The MCP server is a thin wrapper around PiecesOS, the local daemon that does the actual capture and storage. You need PiecesOS running for the server to function.

What does Pieces actually capture?

Long-term memory captures code snippets you save explicitly, plus optional ambient capture of browser activity, chat conversations, and IDE actions. You control what's enabled. Everything stays local by default.

Does it send data to Pieces' servers?

Only if you opt into cloud sync. The default mode is fully local, processed by an on-device LLM. This makes it appropriate for sensitive code where you don't want to leak context to a third party.

How is this different from Anthropic's Memory server?

Anthropic's Memory server is a knowledge graph you populate by writing to it explicitly. Pieces is ambient capture — it watches what you do and builds memory automatically. Different shape of memory, different costs.