Memory MCP

A Model Context Protocol server for managing LLM conversation memories with intelligent context window caching.

Memory MCP provides a Model Context Protocol (MCP) server for logging, retrieving, and managing memories from large language model (LLM) conversations. It offers features such as context window caching, relevance scoring, and tag-based context retrieval, leveraging MongoDB for persistent storage. The system is designed to efficiently archive, score, and summarize conversational context, supporting external orchestration and advanced memory management tools. This enables seamless handling of conversation history and dynamic context for enhanced LLM applications.

Key Features

Store and retrieve labeled conversation memories
Append new memories without overwriting
Clear all stored memories
Archive and retrieve context messages by conversation
Automatic and manual context window caching
Relevance scoring of context against conversations
Tag-based categorization and search for context
Summarize and link archived context items
Persistent MongoDB backend integration
CLI demo for interactive orchestration

Use Cases

Persisting LLM conversation histories for future reference
Dynamic context window management in AI chat applications
Retrieving the most relevant conversational context for ongoing dialogue
Segmented and tagged archival of conversation data for analysis
Clearing or updating conversation memory stores as needed
Automated orchestration of conversational context for agents
Summarizing previous conversation content for AI assistants
Providing memory continuity in multi-session language model deployments
Real-time retrieval and scoring of relevant context for LLM decision making
Enhancing user personalization by storing user-identified memories

README

Memory MCP

A Model Context Protocol (MCP) server for logging and retrieving memories from LLM conversations with intelligent context window caching capabilities.

Features

  • Save Memories: Store memories from LLM conversations with timestamps and LLM identification
  • Retrieve Memories: Get all stored memories with detailed metadata
  • Add Memories: Append new memories without overwriting existing ones
  • Clear Memories: Remove all stored memories
  • Context Window Caching: Archive, retrieve, and summarize conversation context
  • Relevance Scoring: Automatically score archived content relevance to current context
  • Tag-based Search: Categorize and search context by tags
  • Conversation Orchestration: External system to manage context window caching
  • MongoDB Storage: Persistent storage using MongoDB database

Installation

Option 1: Install from npm (Recommended)

bash
npm install -g @jamesanz/memory-mcp

The package will automatically configure Claude Desktop on installation.

Option 2: Install from source

  1. Install dependencies:
bash
npm install
  2. Build the project:
bash
npm run build

Configuration

Set the MongoDB connection string via environment variable:

bash
export MONGODB_URI="mongodb://localhost:27017"

Default: mongodb://localhost:27017
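As a sketch of the fallback behavior described above, the server can resolve its connection string from the environment and default to the local instance when `MONGODB_URI` is unset. The helper name `resolveMongoUri` is illustrative, not an export of this package:

```typescript
// Illustrative sketch of the documented default: use MONGODB_URI if set,
// otherwise fall back to mongodb://localhost:27017. resolveMongoUri is a
// hypothetical helper name, not part of this package's API.
function resolveMongoUri(env: Record<string, string | undefined>): string {
  return env.MONGODB_URI ?? "mongodb://localhost:27017";
}
```

In the server itself this would be called with `process.env` at startup, before opening the MongoDB connection.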

Usage

Running the MCP Server

Start the MCP server:

bash
npm start

Running the Conversation Orchestrator Demo

Try the interactive CLI demo:

bash
npm run cli

The CLI demo allows you to:

  • Add messages to simulate conversation
  • See automatic archiving when context gets full
  • Trigger manual archiving and retrieval
  • Create summaries of archived content
  • Monitor conversation status and get recommendations

Basic Memory Tools

  1. save-memories: Save all memories to the database, overwriting existing ones

    • memories: Array of memory strings to save
    • llm: Name of the LLM (e.g., 'chatgpt', 'claude')
    • userId: Optional user identifier
  2. get-memories: Retrieve all memories from the database

    • No parameters required
  3. add-memories: Add new memories to the database without overwriting existing ones

    • memories: Array of memory strings to add
    • llm: Name of the LLM (e.g., 'chatgpt', 'claude')
    • userId: Optional user identifier
  4. clear-memories: Clear all memories from the database

    • No parameters required
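The difference between save-memories (overwrite) and add-memories (append) can be sketched with an in-memory stand-in for the MongoDB collection. All names here are illustrative, not the server's internals:

```typescript
// In-memory stand-in for the memories collection; the real server
// persists to MongoDB. These helper names are illustrative only.
let store: string[] = [];

// save-memories semantics: replace everything stored before.
function saveMemories(memories: string[]): void {
  store = [...memories];
}

// add-memories semantics: append without touching existing entries.
function addMemories(memories: string[]): void {
  store.push(...memories);
}

// get-memories semantics: return what is stored.
function getMemories(): string[] {
  return [...store];
}

// clear-memories semantics: remove everything.
function clearMemories(): void {
  store = [];
}
```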

Context Window Caching Tools

  1. archive-context: Archive context messages for a conversation with tags and metadata

    • conversationId: Unique identifier for the conversation
    • contextMessages: Array of context messages to archive
    • tags: Tags for categorizing the archived content
    • llm: Name of the LLM (e.g., 'chatgpt', 'claude')
    • userId: Optional user identifier
  2. retrieve-context: Retrieve relevant archived context for a conversation

    • conversationId: Unique identifier for the conversation
    • tags: Optional tags to filter by
    • minRelevanceScore: Minimum relevance score (0-1, default: 0.1)
    • limit: Maximum number of items to return (default: 10)
  3. score-relevance: Score the relevance of archived context against current conversation context

    • conversationId: Unique identifier for the conversation
    • currentContext: Current conversation context to compare against
    • llm: Name of the LLM (e.g., 'chatgpt', 'claude')
  4. create-summary: Create a summary of context items and link them to the summary

    • conversationId: Unique identifier for the conversation
    • contextItems: Context items to summarize
    • summaryText: Human-provided summary text
    • llm: Name of the LLM (e.g., 'chatgpt', 'claude')
    • userId: Optional user identifier
  5. get-conversation-summaries: Get all summaries for a specific conversation

    • conversationId: Unique identifier for the conversation
  6. search-context-by-tags: Search archived context and summaries by tags

    • tags: Tags to search for
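The README notes later that relevance scoring uses simple keyword overlap. A minimal sketch of such a scorer follows; the function name and exact formula are assumptions, not the server's implementation:

```typescript
// Keyword-overlap relevance scoring, as a sketch: tokenize both texts,
// count shared words, normalize to a 0-1 score. The real server's
// formula may differ; scoreRelevance is an illustrative name.
function scoreRelevance(archived: string, current: string): number {
  const tokenize = (text: string): Set<string> =>
    new Set(
      text
        .toLowerCase()
        .split(/\W+/)
        .filter((w) => w.length > 2),
    );
  const a = tokenize(archived);
  const b = tokenize(current);
  if (a.size === 0 || b.size === 0) return 0;
  let overlap = 0;
  for (const word of a) if (b.has(word)) overlap++;
  return overlap / Math.min(a.size, b.size);
}
```

A score like this is cheap to compute per archived item, which is why the README suggests semantic similarity only as a later enhancement.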

Example Usage in LLM

Basic Memory Operations

  1. Save all memories (overwrites existing):

    User: "Save all my memories from this conversation to the MCP server"
    LLM: [Uses save-memories tool with current conversation memories]
    
  2. Retrieve all memories:

    User: "Get all my memories from the MCP server"
    LLM: [Uses get-memories tool to retrieve stored memories]
    

Context Window Caching Workflow

  1. Archive context when window gets full:

    User: "The conversation is getting long, archive the early parts"
    LLM: [Uses archive-context tool to store old messages with tags]
    
  2. Score relevance of archived content:

    User: "How relevant is the archived content to our current discussion?"
    LLM: [Uses score-relevance tool to evaluate archived content]
    
  3. Retrieve relevant archived context:

    User: "Bring back the relevant archived information"
    LLM: [Uses retrieve-context tool to get relevant archived content]
    
  4. Create summaries for long conversations:

    User: "Summarize the early parts of our conversation"
    LLM: [Uses create-summary tool to condense archived content]
    

Conversation Orchestration System

The ConversationOrchestrator class provides automatic context window management:

Key Features

  • Automatic Archiving: Archives content when context usage reaches 80%
  • Intelligent Retrieval: Retrieves relevant content when usage drops below 30%
  • Relevance Scoring: Uses keyword overlap to score archived content relevance
  • Smart Tagging: Automatically generates tags based on content keywords
  • Conversation State Management: Tracks active conversations and their context
  • Recommendations: Provides suggestions for optimal context management

Usage Example

typescript
import { ConversationOrchestrator } from "./orchestrator.js";

const orchestrator = new ConversationOrchestrator(8000); // 8k word limit

// Add a message (triggers automatic archiving/retrieval)
const result = await orchestrator.addMessage(
  "conversation-123",
  "This is a new message in the conversation",
  "claude",
);

// Check if archiving is needed
if (result.archiveDecision?.shouldArchive) {
  await orchestrator.executeArchive(result.archiveDecision, result.state);
}

// Check if retrieval is needed
if (result.retrievalDecision?.shouldRetrieve) {
  await orchestrator.executeRetrieval(result.retrievalDecision, result.state);
}

Database Schema

Basic Memory Structure

typescript
type BasicMemory = {
  _id: ObjectId;
  memories: string[]; // Array of memory strings
  timestamp: Date; // When memories were saved
  llm: string; // LLM identifier (e.g., 'chatgpt', 'claude')
  userId?: string; // Optional user identifier
};

Extended Memory Structure (Context Caching)

typescript
type ExtendedMemory = {
  _id: ObjectId;
  memories: string[]; // Array of memory strings
  timestamp: Date; // When memories were saved
  llm: string; // LLM identifier
  userId?: string; // Optional user identifier
  conversationId?: string; // Unique conversation identifier
  contextType?: "active" | "archived" | "summary";
  relevanceScore?: number; // 0-1 relevance score
  tags?: string[]; // Categorization tags
  parentContextId?: ObjectId; // Reference to original content for summaries
  messageIndex?: number; // Order within conversation
  wordCount?: number; // Size tracking
  summaryText?: string; // Condensed version
};
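Given this schema, the retrieve-context defaults (minRelevanceScore 0.1, limit 10) can be sketched as a filter over archived items. This is an assumption about the query shape, not the server's actual MongoDB query:

```typescript
// Sketch of retrieve-context filtering using the documented defaults;
// field names follow the ExtendedMemory schema, but retrieveContext
// itself is an illustrative helper.
type ArchivedItem = {
  conversationId?: string;
  contextType?: "active" | "archived" | "summary";
  relevanceScore?: number;
  tags?: string[];
};

function retrieveContext(
  items: ArchivedItem[],
  conversationId: string,
  tags?: string[],
  minRelevanceScore = 0.1,
  limit = 10,
): ArchivedItem[] {
  return items
    .filter((i) => i.conversationId === conversationId)
    .filter((i) => i.contextType === "archived")
    .filter((i) => (i.relevanceScore ?? 0) >= minRelevanceScore)
    .filter((i) => !tags || tags.some((t) => i.tags?.includes(t) ?? false))
    .sort((x, y) => (y.relevanceScore ?? 0) - (x.relevanceScore ?? 0))
    .slice(0, limit);
}
```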

Context Window Caching Workflow

The orchestration system automatically:

  1. Monitors conversation length and context usage
  2. Archives content when context usage reaches 80%
  3. Scores relevance of archived content against current context
  4. Retrieves relevant content when usage drops below 30%
  5. Creates summaries to condense very long conversations
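The 80% archive and 30% retrieval thresholds above can be sketched as a pure decision function. The name and return shape are illustrative, not the ConversationOrchestrator's real interface:

```typescript
// Decision sketch for the documented thresholds: archive at >= 80% of
// the context word limit, retrieve when usage falls below 30%.
// Illustrative only; not the ConversationOrchestrator API.
function contextDecision(wordCount: number, wordLimit: number) {
  const usage = wordCount / wordLimit;
  return {
    usage,
    shouldArchive: usage >= 0.8,
    shouldRetrieve: usage < 0.3,
  };
}
```

Keeping a gap between the two thresholds avoids thrashing, where content is archived and immediately retrieved again.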

Key Features

  • Conversation Grouping: All archived content is linked to specific conversation IDs
  • Relevance Scoring: Simple keyword overlap scoring (can be enhanced with semantic similarity)
  • Tag-based Organization: Categorize content for easy retrieval
  • Summary Linking: Preserve links between summaries and original content
  • Backward Compatibility: All existing memory functions work unchanged
  • Automatic Management: No manual intervention required for basic operations

Development

To run in development mode:

bash
npm run build
node build/index.js

To run the CLI demo:

bash
npm run cli

License

ISC


Repository Owner

JamesANZ (User)

Repository Details

Language TypeScript
Default Branch main
Size 23 KB
Contributors 2
License MIT License
MCP Verified Nov 12, 2025

Programming Languages

TypeScript 74.86%
JavaScript 25.14%

Topics

llm-memory llms mcp-server


Related MCPs

Discover similar Model Context Protocol servers

  • Agentic Long-Term Memory with Notion Integration

    Production-ready agentic long-term memory and Notion integration with Model Context Protocol support.

    Agentic Long-Term Memory with Notion Integration enables AI agents to incorporate advanced long-term memory capabilities using both vector and graph databases. It offers comprehensive Notion workspace integration along with a production-ready Model Context Protocol (MCP) server supporting HTTP and stdio transports. The tool facilitates context management, tool discovery, and advanced function chaining for complex agentic workflows.

    • 4
    • MCP
    • ankitmalik84/Agentic_Longterm_Memory
  • MCP CLI

    A powerful CLI for seamless interaction with Model Context Protocol servers and advanced LLMs.

    MCP CLI is a modular command-line interface designed for interacting with Model Context Protocol (MCP) servers and managing conversations with large language models. It integrates with the CHUK Tool Processor and CHUK-LLM to provide real-time chat, interactive command shells, and automation capabilities. The system supports a wide array of AI providers and models, advanced tool usage, context management, and performance metrics. Rich output formatting, concurrent tool execution, and flexible configuration make it suitable for both end-users and developers.

    • 1,755
    • MCP
    • chrishayuk/mcp-cli
  • VictoriaMetrics MCP Server

    Model Context Protocol server enabling advanced monitoring and observability for VictoriaMetrics.

    VictoriaMetrics MCP Server implements the Model Context Protocol (MCP) to provide seamless integration with VictoriaMetrics, allowing advanced monitoring, data exploration, and observability. It offers access to almost all read-only APIs, as well as embedded documentation for offline usage. The server facilitates comprehensive metric querying, cardinality analysis, alert and rule testing, and automation capabilities for engineers and tools.

    • 87
    • MCP
    • VictoriaMetrics-Community/mcp-victoriametrics
  • Raindrop.io MCP Server

    Enable LLMs to manage and search Raindrop.io bookmarks via the Model Context Protocol.

    Raindrop.io MCP Server is an integration that allows large language models to interact with Raindrop.io bookmarks using the Model Context Protocol. It provides tools to create and search bookmarks, including filtering by tags, and is designed for interoperability with environments like Claude for Desktop. Installation can be done via Smithery or manually, and configuration is managed through environment variables. The project is open source and optimized for secure, tokenized access to Raindrop.io.

    • 63
    • MCP
    • hiromitsusasaki/raindrop-io-mcp-server
  • MCP System Monitor

    Real-time system metrics for LLMs via Model Context Protocol

    MCP System Monitor exposes real-time system metrics, such as CPU, memory, disk, network, host, and process information, through an interface compatible with the Model Context Protocol (MCP). The tool enables language models to retrieve detailed system data in a standardized way. It supports querying various hardware and OS statistics via structured tools and parameters. Designed with LLM integration in mind, it facilitates context-aware system monitoring for AI-driven applications.

    • 73
    • MCP
    • seekrays/mcp-monitor
  • OpenStreetMap MCP Server

    Enhancing LLMs with geospatial and location-based capabilities via the Model Context Protocol.

    OpenStreetMap MCP Server enables large language models to interact with rich geospatial data and location-based services through a standardized protocol. It provides APIs and tools for address geocoding, reverse geocoding, points of interest search, route directions, and neighborhood analysis. The server exposes location-related resources and tools, making it compatible with MCP hosts for seamless LLM integration.

    • 134
    • MCP
    • jagan-shanmugam/open-streetmap-mcp