MCP

xmemory exposes a Model Context Protocol server over Streamable HTTP. Any MCP-compatible client — Claude Desktop, Cursor, Windsurf, pydantic-ai, LangChain, Mastra, or a plain SDK call — can connect and get access to xmemory’s read and write tools with no custom code.

Video tutorial coming soon — we’re putting together a step-by-step video walkthrough showing how to connect xmemory via MCP to products like Claude Code, ChatGPT, and Codex using their native MCP connectors. Stay tuned!


API key: To use xmemory APIs or integrations (including MCP), you need an API key. Register your interest at https://xmemory.ai and we will reach out to grant access. Copy the key and store it securely; never share your API key publicly.

The token is a bearer token that encodes which instance the session is bound to. You don’t pass instance_id in tool calls — the server resolves it from the token automatically.

Authorization: Bearer <your-token>

URL        https://mcp.xmemory.ai/
Transport  Streamable HTTP
Auth       Bearer token in the Authorization header

Any MCP client that supports Streamable HTTP can connect. Here is the minimal pattern:

{
  "mcpServers": {
    "xmemory": {
      "url": "https://mcp.xmemory.ai/",
      "headers": {
        "Authorization": "Bearer <your-token>"
      }
    }
  }
}
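
If you provision clients programmatically, the same configuration can be built as a plain dict and serialized. A minimal sketch; `xmemory_mcp_config` is a hypothetical helper name, not part of any xmemory SDK:

```python
import json

MCP_URL = "https://mcp.xmemory.ai/"

def xmemory_mcp_config(token: str) -> dict:
    """Build the mcpServers entry shown above for a given bearer token."""
    return {
        "mcpServers": {
            "xmemory": {
                "url": MCP_URL,
                "headers": {"Authorization": f"Bearer {token}"},
            }
        }
    }

# Serialize for a client config file
print(json.dumps(xmemory_mcp_config("<your-token>"), indent=2))
```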

For framework-specific setup, see the integration guides: pydantic-ai, LangChain, and Mastra AI.


The xmemory MCP server exposes 6 tools to instance connections.

Tool descriptions are dynamic — on each list_tools() call, the server fetches your instance’s schema and appends a summary of its object types and relations to each tool description. This means the LLM sees tool descriptions tailored to your specific instance, making it more likely to use the tools correctly.

Returns the instance ID bound to the current session (e.g. "inst_abc123").

Parameters: none.

Useful for display, logging, or confirming which instance the agent is operating on.

Returns the full instance schema as a JSON string — object types with their fields, relations, deduplication keys, and descriptions.

Parameters: none.

The LLM can call this to understand what kinds of data the instance stores, which helps it formulate better write and read calls.

Extracts structured entities from free-form text and persists them. Synchronous — blocks until the data is fully committed.

Parameter  Type    Description
text       string  Free-form text containing facts to extract and remember

Returns {"status": "ok"} on success.

Internally, the server runs a two-phase pipeline: an LLM extracts structured objects according to your instance’s schema, then a diff engine compares them against existing data and applies inserts, updates, and deletes.
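
The diff step can be pictured as matching extracted objects against existing data by deduplication key. A toy sketch of that idea, not xmemory's actual engine, assuming objects are dicts keyed by their dedup value:

```python
def diff(existing: dict, extracted: dict):
    """Compare object maps keyed by dedup key; return (inserts, updates, deletes)."""
    inserts = [v for k, v in extracted.items() if k not in existing]
    updates = [v for k, v in extracted.items()
               if k in existing and existing[k] != v]
    deletes = [v for k, v in existing.items() if k not in extracted]
    return inserts, updates, deletes

# One new object (carol), one changed (alice), one removed (bob)
old = {"alice": {"role": "eng"}, "bob": {"role": "pm"}}
new = {"alice": {"role": "mgr"}, "carol": {"role": "eng"}}
print(diff(old, new))
```

The real pipeline is more conservative about deletes, but the insert/update/delete split follows this shape.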

Because write blocks until committed, you can call read immediately after and get consistent results.

Same as write, but enqueues the operation and returns immediately with a write_id.

Parameter  Type    Description
text       string  Free-form text containing facts to extract and remember

Returns {"status": "ok", "write_id": "<uuid>"}.

Important: do not call read immediately after write_async — the data may not be committed yet. Use write_status to poll, or use write (synchronous) when you need to read right after.

Checks the status of an async write previously submitted via write_async.

Parameter  Type    Description
write_id   string  The write ID returned by write_async

Returns:

{
  "status": "ok",
  "write_id": "<uuid>",
  "write_status": "queued | processing | completed | failed | not_found",
  "error_detail": "<string or null>",
  "completed_at": "<ISO timestamp or null>"
}

write_status  Meaning
queued        Waiting to be picked up
processing    Currently being extracted and applied
completed     Successfully committed — safe to read
failed        Extraction or persistence failed; see error_detail
not_found     No write with this ID exists
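
A client-side polling loop over these states might look like the following sketch. `call_status` is a stand-in for however your MCP client invokes the write_status tool and parses its JSON result:

```python
import time

def wait_for_write(call_status, write_id: str,
                   interval: float = 0.5, timeout: float = 30.0) -> dict:
    """Poll write_status until the write completes, fails, or times out.

    call_status(write_id) must return the parsed write_status response dict.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        resp = call_status(write_id)
        state = resp.get("write_status")
        if state == "completed":
            return resp  # committed; safe to call read now
        if state in ("failed", "not_found"):
            raise RuntimeError(
                f"write {write_id}: {state}: {resp.get('error_detail')}")
        time.sleep(interval)  # queued / processing: keep polling
    raise TimeoutError(f"write {write_id} did not complete within {timeout}s")
```

Tune `interval` and `timeout` to your workload; large texts take longer to extract and commit.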

Queries the instance and returns a natural-language answer.

Parameter  Type    Description
query      string  A natural-language question about the stored data

Returns a JSON string with an answer field — a human-readable response synthesized from the structured data (capped at 1,000 characters).

Internally, the server translates the question into SQL against the instance’s knowledge graph, executes it with automatic retry and empty-result verification, and formats the result into a plain-text answer.


Use write when you need to read the data back immediately — it blocks until committed, guaranteeing consistency.

Use write_async + write_status when throughput matters more than immediate consistency — the client isn’t blocked, and you can poll for completion later.


All tools return {"error": "<message>"} as a JSON string on failure rather than raising exceptions, so the MCP client always gets a parseable response. Common errors:

Error                                            Cause
"no instance bound to this session"              Token is invalid or not linked to an instance
"text size (N bytes) exceeds maximum (M bytes)"  Write payload too large (limit: 1 MB)
"write queue not ready"                          Background processor hasn’t started
"write failed: <detail>"                         Extraction or persistence failure
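
Because every tool returns this uniform error shape as a JSON string, a small client-side wrapper can turn it back into ordinary exception handling. A sketch; `unwrap` and `XMemoryToolError` are hypothetical names, not part of any SDK:

```python
import json

class XMemoryToolError(RuntimeError):
    """Raised when a tool result carries the {"error": ...} shape."""

def unwrap(raw: str) -> dict:
    """Parse a tool result string, raising instead of returning an error payload."""
    payload = json.loads(raw)
    if isinstance(payload, dict) and "error" in payload:
        raise XMemoryToolError(payload["error"])
    return payload
```

Passing every tool result through one helper like this keeps error handling in a single place rather than scattered across call sites.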