MemoryMesh
MemoryMesh is a knowledge graph server designed for AI models, with a focus on text-based RPGs and interactive storytelling. It helps AI maintain consistent, structured memory across conversations, enabling richer and more dynamic interactions.
The project is based on the Knowledge Graph Memory Server from the MCP servers repository and retains its core functionality.
IMPORTANT
Since v0.2.7 the default location of schemas was changed to dist/data/schemas.
This location is not expected to change in the future, but if you are updating from a previous version, make sure to move your schema files to the new location.
Overview
MemoryMesh is a local knowledge graph server that empowers you to build and manage structured information for AI models. While particularly well-suited for text-based RPGs, its adaptable design makes it useful for various applications, including social network simulations, organizational planning, or any scenario involving structured data.
Key Features
- Dynamic Schema-Based Tools: Define your data structure with schemas, and MemoryMesh automatically generates tools for adding, updating, and deleting data.
- Intuitive Schema Design: Create schemas that guide the AI in generating and connecting nodes, using required fields, enumerated types, and relationship definitions.
- Metadata for AI Guidance: Use metadata to provide context and structure, helping the AI understand the meaning and relationships within your data.
- Relationship Handling: Define relationships within your schemas to encourage the AI to create connections (edges) between related data points (nodes).
- Informative Feedback: Provides error feedback to the AI, enabling it to learn from mistakes and improve its interactions with the knowledge graph.
- Event Support: An event system tracks operations, providing insights into how the knowledge graph is being modified.
Nodes
Nodes represent entities or concepts within the knowledge graph. Each node has:
- name: A unique identifier.
- nodeType: The type of the node (e.g., npc, artifact, location), defined by your schemas.
- metadata: An array of strings providing descriptive details about the node.
Example Node:
{
"name": "Aragorn",
"nodeType": "player_character",
"metadata": [
"Race: Human",
"Class: Ranger",
"Skills: Tracking, Swordsmanship",
"Affiliation: Fellowship of the Ring"
]
}
Edges
Edges represent relationships between nodes. Each edge has:
- from: The name of the source node.
- to: The name of the target node.
- edgeType: The type of relationship (e.g., owns, located_in).
- weight: (Optional) A numerical value between 0 and 1 representing the strength of the relationship, defaulting to 1.
Example Edge:
{
"from": "Aragorn",
"to": "Andúril",
"edgeType": "owns"
}
Schemas
Schemas are the heart of MemoryMesh. They define the structure of your data and drive the automatic generation of tools.
Schema File Location
Place your schema files (.schema.json) in the dist/data/schemas directory of your built MemoryMesh project. MemoryMesh will automatically detect and process these files on startup.
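For reference, a built project might look like this (the directory paths match those described in this README; the individual schema file names are illustrative):
dist/
└── data/
    ├── memory.json
    └── schemas/
        ├── add_npc.schema.json
        ├── add_location.schema.json
        └── ... (your other .schema.json files)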
Schema Structure
File name: [name].schema.json. For example, for a schema defining an 'npc', the filename would be add_npc.schema.json.
- name - Identifier for the schema and node type within the memory. IMPORTANT: The schema's name must start with add_ to be recognized.
- description - Used as the description for the add_<name> tool, providing context for the AI. (The delete and update tools have a generic description.)
- properties - Each property includes its type, description, and additional constraints.
  - type - Supported values are string or array.
  - description - Helps guide the AI on the entity's purpose.
  - required - Boolean. If true, the AI is forced to provide this property when creating a node.
  - enum - An array of strings. If present, the AI must choose one of the given options.
  - relationship - Defines a connection to another node. If a property is required and has a relationship, the AI will always create both the node and the corresponding edge.
    - edgeType - Type of the relationship to be created.
    - description - Helps guide the AI on the relationship's purpose.
- additionalProperties - Boolean. If true, allows the AI to add extra attributes beyond those defined as required or optional.
Example Schema (add_npc.schema.json):
{
"name": "add_npc",
"description": "Schema for adding an NPC to the memory" ,
"properties": {
"name": {
"type": "string",
"description": "A unique identifier for the NPC",
"required": true
},
"race": {
"type": "string",
"description": "The species or race of the NPC",
"required": true,
"enum": [
"Human",
"Elf",
"Dwarf",
"Orc",
"Goblin"
]
},
"currentLocation": {
"type": "string",
"description": "The current location of the NPC",
"required": true,
"relationship": {
"edgeType": "located_in",
"description": "The current location of the NPC"
}
}
},
"additionalProperties": true
}
Based on this schema, MemoryMesh automatically creates:
- add_npc: To add new NPC nodes.
- update_npc: To modify existing NPC nodes.
- delete_npc: To remove NPC nodes.
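For illustration, the AI might call the generated add_npc tool with arguments like the following (the values are hypothetical; the argument shape mirrors the properties defined in the schema):
{
  "name": "Brynn the Blacksmith",
  "race": "Human",
  "currentLocation": "Rivertown"
}
Because currentLocation is required and defines a relationship, the corresponding located_in edge from "Brynn the Blacksmith" to "Rivertown" is created along with the node.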
MemoryMesh includes 11 pre-built schemas designed for text-based RPGs, providing a ready-to-use foundation for game development.
SchemaManager Tool
MemoryMesh includes a SchemaManager tool to simplify schema creation and editing. It provides a visual interface, making it easy to define your data structures without writing JSON directly.
Dynamic Tools
MemoryMesh simplifies interaction with your knowledge graph through dynamic tools. These tools are not manually coded but are automatically generated directly from your schema definitions. This means that when you define the structure of your data using schemas, MemoryMesh intelligently creates a set of tools tailored to work with that specific data structure.
Think of it like this: You provide a blueprint (the schema), and MemoryMesh automatically constructs the necessary tools to build, modify, and remove elements based on that blueprint.
How does it work behind the scenes?
MemoryMesh has an intelligent system that reads your schema definitions. It analyzes the structure you've defined, including the properties of your entities and their relationships. Based on this analysis, it automatically creates a set of tools for each entity type:
- add_<entity>: A tool for creating new instances of an entity.
- update_<entity>: A tool for modifying existing entities.
- delete_<entity>: A tool for removing entities.
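As a rough sketch only (the exact argument shape of the generated update and delete tools is not documented here, so treat this as an assumption), an update_<entity> call might pass the node's unique name together with the fields to change:
{
  "name": "Brynn the Blacksmith",
  "currentLocation": "Capital City"
}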
These tools are then made available through a central hub within MemoryMesh, ensuring they can be easily accessed and used by any connected client or AI.
In essence, MemoryMesh's dynamic tool system provides a powerful and efficient way to manage your knowledge graph, freeing you to focus on the content and logic of your application rather than the underlying mechanics of data manipulation.
Memory file
By default, data is stored in a JSON file in dist/data/memory.json.
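This file simply persists the graph between sessions. Purely as an illustration (the actual on-disk layout is an internal detail of MemoryMesh and may differ), a stored record corresponds to one of the nodes or edges shown earlier, for example:
{
  "type": "node",
  "name": "Aragorn",
  "nodeType": "player_character",
  "metadata": ["Race: Human", "Class: Ranger"]
}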
Memory Viewer
The Memory Viewer is a separate tool designed to help you visualize and inspect the contents of the knowledge graph managed by MemoryMesh. It provides a user-friendly interface for exploring nodes, edges, and their properties.
Key Features:
- Graph Visualization: View the knowledge graph as an interactive node-link diagram.
- Node Inspection: Select nodes to see their nodeType, metadata, and connected edges.
- Edge Exploration: Examine relationships between nodes, including edgeType and direction.
- Search and Filtering: Quickly find specific nodes, filter by node type or edge type, and choose whether to show nodes, edges, or both.
- Table View: Allows you to easily find and inspect specific nodes and edges, or all of them at once.
- Raw JSON View: Allows you to view the raw JSON data from the memory file.
- Stats Panel: Provides key metrics and information about the knowledge graph: total nodes, total edges, node types, and edge types.
Accessing the Memory Viewer
The Memory Viewer is a standalone web application; see the Memory Viewer discussion for details on how to access it.
Using the Memory Viewer
- Select Memory File: In the Memory Viewer, click the "Select Memory File" button.
- Choose File: Navigate to your MemoryMesh project directory and select the memory.json file (located in dist/data/memory.json by default).
- Explore: The Memory Viewer will load and display the contents of your knowledge graph.
Memory Flow
Prompt
For optimal results, use Claude's "Projects" feature with custom instructions. Here's an example of a prompt you can start with:
You are a helpful AI assistant managing a knowledge graph for a text-based RPG. You have access to the following tools: add_npc, update_npc, delete_npc, add_location, update_location, delete_location, and other tools for managing the game world.
When the user provides input, first process it using your available tools to update the knowledge graph. Then, respond in a way that is appropriate for a text-based RPG.
You can also instruct the AI to perform specific actions directly in the chat.
Experiment with different prompts to find what works best for your use case!
Example
- A simple example with custom instructions.
- An example prompt, with visualization (the visualization itself is NOT part of MemoryMesh's functionality):
"Add a couple of cities, some NPCs, a couple of locations around the city to explore, and hide an artifact or two somewhere."
Installation
Installing via Smithery
To install MemoryMesh for Claude Desktop automatically via Smithery:
npx -y @smithery/cli install memorymesh --client claude
Prerequisites
- Node.js: Version 18 or higher. You can download it from nodejs.org.
- npm: Usually included with Node.js.
- Claude for Desktop: Make sure you have the latest version installed from claude.ai/download.
Installation Steps
1. Clone the Repository:
   git clone https://github.com/CheMiguel23/memorymesh.git
   cd memorymesh
2. Install Dependencies:
   npm install
3. Build the Project:
   npm run build
   This command compiles the TypeScript code into JavaScript in the dist directory and copies sample schema and data files into it as well.
4. Verify File Copy (Optional):
   - The build process should automatically copy the data folder to dist.
   - Check that dist/data exists and contains .json files. Also verify that dist/data/schemas exists and contains .schema.json files.
5. Configure Claude Desktop:
   Open your Claude Desktop configuration file:
   - macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
   - Windows: %APPDATA%\Claude\claude_desktop_config.json
   Add an entry for memorymesh to the mcpServers section:
   "mcpServers": {
     "memorymesh": {
       "command": "node",
       "args": ["/ABSOLUTE/PATH/TO/YOUR/PROJECT/memorymesh/dist/index.js"]
     }
   }
   - Replace /ABSOLUTE/PATH/TO/YOUR/PROJECT/ with the actual absolute path to your memorymesh project directory.
   - Example (macOS): "command": "node", "args": ["/Users/yourusername/Projects/memorymesh/dist/index.js"]
   - Example (Windows): "command": "node", "args": ["C:\\Projects\\memorymesh\\dist\\index.js"]
6. Restart Claude Desktop: Completely restart Claude Desktop for the changes to take effect.
Verify Installation
- Start Claude Desktop.
- Open a new chat.
- Look for the MCP plugin icon in the top-right corner. If it's there, your configuration is likely correct.
- Click the icon. You should see "memorymesh" in the list of connected servers.
- Click the icon. If you see tools listed (e.g., add_npc, update_npc, etc.), your server is working and exposing tools correctly.
Updating
Before updates, make sure to back up your dist/data directory to avoid losing your memory data.
Troubleshooting
- Server not appearing in Claude:
  - Double-check the paths in your claude_desktop_config.json. Make sure they are absolute paths and correct.
  - Verify that the dist directory exists and contains the compiled JavaScript files, including index.js.
  - Check the Claude Desktop logs for errors:
    - macOS: ~/Library/Logs/Claude/mcp-server-memorymesh.log (and mcp.log)
    - Windows: likely in a Logs folder under %AppData%\Claude
- Tools not showing up:
  - Make sure your npm run build command completed without errors.
  - Verify that your schema files are correctly placed in dist/data/schemas and follow the correct naming convention (add_[entity].schema.json).
  - Check your server's console output or logs for any errors during initialization.
Advanced Configuration
MemoryMesh offers several ways to customize its behavior beyond the basic setup:
Variables
You can override default settings in /config/config.ts:
- MEMORY_FILE: Specifies the path to the JSON file used for storing the knowledge graph data. (Default: dist/data/memory.json)
- SCHEMAS_DIR: Specifies the path to the directory containing schema files. (Default: dist/data/schemas)
Limitations
- Node Deletion: The AI may be hesitant to delete nodes from the knowledge graph. Encourage it through prompts if needed.
Contribution
This project is a personal exploration into integrating structured data with AI reasoning capabilities. Contributions, feedback, and ideas are welcome to push it further or inspire new projects.
Related MCPs
Discover similar Model Context Protocol servers
Memory MCP
A Model Context Protocol server for managing LLM conversation memories with intelligent context window caching.
Memory MCP provides a Model Context Protocol (MCP) server for logging, retrieving, and managing memories from large language model (LLM) conversations. It offers features such as context window caching, relevance scoring, and tag-based context retrieval, leveraging MongoDB for persistent storage. The system is designed to efficiently archive, score, and summarize conversational context, supporting external orchestration and advanced memory management tools. This enables seamless handling of conversation history and dynamic context for enhanced LLM applications.
- ⭐ 10
- MCP
- JamesANZ/memory-mcp
Zettelkasten MCP Server
A Zettelkasten-based knowledge management system implementing the Model Context Protocol.
Zettelkasten MCP Server provides an implementation of the Zettelkasten note-taking methodology, enriched with bidirectional linking, semantic relationships, and categorization of notes. It enables creation, exploration, and synthesis of atomic knowledge using MCP for AI-assisted workflows. The system integrates with clients such as Claude and supports markdown, advanced search, and a structured prompt framework for large language models. The dual storage architecture and synchronous operation model ensure flexibility and reliability for managing personal or collaborative knowledge bases.
- ⭐ 114
- MCP
- entanglr/zettelkasten-mcp
@growi/mcp-server
Bridge GROWI wiki content to AI models with context-aware access and management.
@growi/mcp-server acts as a Model Context Protocol (MCP) server that enables AI models to access, search, and manage GROWI wiki content within an organization. It facilitates seamless connection between multiple GROWI instances and language models, enhancing information retrieval and knowledge management capabilities. The platform provides comprehensive tools for page, tag, comment, and revision management as well as share link and user activity tracking. Its flexible configuration allows simultaneous operation with several GROWI apps for scalable deployment.
- ⭐ 10
- MCP
- growilabs/growi-mcp-server
Graphlit MCP Server
Integrate and unify knowledge sources for RAG-ready AI context with the Graphlit MCP Server.
Graphlit MCP Server provides a Model Context Protocol interface, enabling seamless integration between MCP clients and the Graphlit platform. It supports ingestion from a wide array of sources such as Slack, Discord, Google Drive, email, Jira, and GitHub, turning them into a searchable, RAG-ready knowledge base. Built-in tools allow for document, media extraction, web crawling, and web search, as well as advanced retrieval and publishing functionalities. The server facilitates easy configuration, sophisticated data operations, and automated notifications for diverse workflows.
- ⭐ 369
- MCP
- graphlit/graphlit-mcp-server
Membase-MCP Server
Decentralized memory layer server for AI agents using the Model Context Protocol.
Membase-MCP Server provides decentralized and persistent storage of conversation history and agent knowledge for AI agents using Unibase and the Model Context Protocol. It supports secure, traceable storage and retrieval of messages to ensure agent continuity and personalization within interactions. The server offers integration with Claude, Windsurf, Cursor, and Cline, allowing dynamic context management such as switching conversations and saving or retrieving messages. The server leverages the Unibase DA network for verifiable storage and agent data interoperability.
- ⭐ 15
- MCP
- unibaseio/membase-mcp
Agentic Long-Term Memory with Notion Integration
Production-ready agentic long-term memory and Notion integration with Model Context Protocol support.
Agentic Long-Term Memory with Notion Integration enables AI agents to incorporate advanced long-term memory capabilities using both vector and graph databases. It offers comprehensive Notion workspace integration along with a production-ready Model Context Protocol (MCP) server supporting HTTP and stdio transports. The tool facilitates context management, tool discovery, and advanced function chaining for complex agentic workflows.
- ⭐ 4
- MCP
- ankitmalik84/Agentic_Longterm_Memory