Chroma MCP Server

A self-hosted Model Context Protocol (MCP) server for Chroma vector database integration.

418 Stars · 78 Forks · 418 Watchers · 14 Issues
Chroma MCP Server implements the Model Context Protocol to allow seamless integration between LLM applications and external data using the Chroma embedding database. It enables AI models to create, manage, and query collections with advanced vector search, full text search, and metadata filtering. The server supports both ephemeral and persistent client types, along with integration for HTTP and cloud-based Chroma instances. Multiple embedding functions, collection management tools, and rich document operations are available for extensible LLM workflows.

Key Features

Implements the Model Context Protocol (MCP) specification
Seamless integration with Chroma embedding database
Supports multiple embedding functions including OpenAI, Cohere, Jina, VoyageAI, and Roboflow
Collection creation, modification, deletion, and statistics retrieval
Advanced document operations: add, query, update, and delete
Semantic vector search and full text search capabilities
Metadata-based filtering and document retrieval
Ephemeral, persistent, HTTP, and cloud client support
Pagination and advanced filtering on collections and documents
Configurable HNSW parameters for optimized searches (see the sketch below)
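
As a rough illustration of what HNSW tuning looks like at the Chroma level, here is a minimal sketch using the public chromadb API; this is not this server's internal code, and the parameter values are arbitrary examples:

```python
# Sketch: configuring HNSW parameters on a Chroma collection.
# Values are illustrative, not recommendations.
import chromadb

client = chromadb.EphemeralClient()  # in-memory client, good for experiments
collection = client.create_collection(
    name="tuned_docs",  # hypothetical collection name
    metadata={
        "hnsw:space": "cosine",       # distance metric: "cosine", "l2", or "ip"
        "hnsw:construction_ef": 128,  # build-time candidate-list size
        "hnsw:search_ef": 64,         # query-time candidate-list size
        "hnsw:M": 32,                 # max neighbors per graph node
    },
)
```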

Use Cases

Augmenting LLMs with external knowledge retrieval using Chroma
Self-hosted or cloud-deployed context management for AI applications
Building LLM-driven apps requiring dynamic memory and recall
Implementing collection-based semantic search in NLP workflows
Managing and indexing documents with custom metadata and embeddings
Rapid prototyping and development using in-memory or persistent storage
Integrating with varied embedding providers for flexible context access
Creating personalized question-answering systems using vector search
Fine-grained access and modification of contextual data for LLMs
Supporting hybrid search scenarios: combining metadata, vector, and text queries (see the sketch below)
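
For a sense of what such a hybrid query looks like against Chroma directly, here is a minimal sketch using the public chromadb API; the collection and field names are made up:

```python
# Sketch: hybrid retrieval combining vector search, metadata filtering,
# and full-text matching in a single chromadb query.
import chromadb

client = chromadb.EphemeralClient()
collection = client.create_collection(name="articles")  # hypothetical name
collection.add(
    ids=["a1", "a2"],
    documents=["Chroma persists embeddings.", "Postgres stores rows."],
    metadatas=[{"topic": "vector-db"}, {"topic": "sql"}],
)

results = collection.query(
    query_texts=["how are embeddings stored?"],   # semantic (vector) part
    where={"topic": "vector-db"},                 # metadata filter
    where_document={"$contains": "embeddings"},   # full-text constraint
    n_results=2,
)
print(results["documents"])
```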

README

Chroma MCP Server

The Model Context Protocol (MCP) is an open protocol designed for effortless integration between LLM applications and external data sources or tools, offering a standardized framework to seamlessly provide LLMs with the context they require.

This server provides data retrieval capabilities powered by Chroma, enabling AI models to create collections over generated data and user inputs, and retrieve that data using vector search, full text search, metadata filtering, and more.

This is an MCP server for self-hosting your access to Chroma. If you are looking for Package Search, you can find the repository for that here.

Features

  • Flexible Client Types (see the client sketch after this list)

    • Ephemeral (in-memory) for testing and development
    • Persistent for file-based storage
    • HTTP client for self-hosted Chroma instances
    • Cloud client for Chroma Cloud integration (automatically connects to api.trychroma.com)
  • Collection Management

    • Create, modify, and delete collections
    • List all collections with pagination support
    • Get collection information and statistics
    • Configure HNSW parameters for optimized vector search
    • Select embedding functions when creating collections
  • Document Operations

    • Add documents with optional metadata and custom IDs
    • Query documents using semantic search
    • Advanced filtering using metadata and document content
    • Retrieve documents by IDs or filters
    • Full text search capabilities
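
For orientation, the server's four client types correspond roughly to Chroma's standard client constructors. A minimal sketch in plain chromadb (not this server's code; hosts, paths, and credentials are placeholders):

```python
# Sketch: the chromadb client each --client-type roughly maps to.
import chromadb

ephemeral = chromadb.EphemeralClient()                        # in-memory
persistent = chromadb.PersistentClient(path="./chroma-data")  # file-based
http = chromadb.HttpClient(host="localhost", port=8000, ssl=False)  # self-hosted
cloud = chromadb.CloudClient(                                 # Chroma Cloud
    tenant="your-tenant-id",
    database="your-database-name",
    api_key="your-api-key",
)
```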

Supported Tools

  • chroma_list_collections - List all collections with pagination support
  • chroma_create_collection - Create a new collection with optional HNSW configuration
  • chroma_peek_collection - View a sample of documents in a collection
  • chroma_get_collection_info - Get detailed information about a collection
  • chroma_get_collection_count - Get the number of documents in a collection
  • chroma_modify_collection - Update a collection's name or metadata
  • chroma_delete_collection - Delete a collection
  • chroma_add_documents - Add documents with optional metadata and custom IDs
  • chroma_query_documents - Query documents using semantic search with advanced filtering
  • chroma_get_documents - Retrieve documents by IDs or filters with pagination
  • chroma_update_documents - Update existing documents' content, metadata, or embeddings
  • chroma_delete_documents - Delete specific documents from a collection
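
Outside of Claude Desktop, these tools can also be exercised programmatically with any MCP client. A minimal sketch using the official mcp Python SDK over stdio; the argument name collection_name is an assumption inferred from the tool descriptions above:

```python
# Sketch: launching chroma-mcp over stdio and calling two of its tools.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    server = StdioServerParameters(command="uvx", args=["chroma-mcp"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Create a collection, then confirm it shows up in the listing.
            await session.call_tool(
                "chroma_create_collection", {"collection_name": "notes"}
            )
            result = await session.call_tool("chroma_list_collections", {})
            print(result.content)

asyncio.run(main())
```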

Embedding Functions

Chroma MCP supports several embedding functions: default, cohere, openai, jina, voyageai, and roboflow.

The embedding functions use Chroma's collection configuration, which persists a collection's selected embedding function. Once a collection is created with a configured embedding function, the same function is used for all future queries and inserts without needing to be specified again. Embedding function persistence was added in Chroma v1.0.0, so this feature is not supported for collections created with version <=0.6.3.
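
In plain chromadb terms, the persistence behavior looks roughly like the following sketch; it assumes chromadb >= 1.0 with CHROMA_OPENAI_API_KEY set in the environment, and the collection name is made up:

```python
# Sketch: embedding-function persistence via collection configuration.
import chromadb
from chromadb.utils.embedding_functions import OpenAIEmbeddingFunction

client = chromadb.PersistentClient(path="./chroma-data")

# The embedding function is recorded in the collection's configuration...
client.create_collection(
    name="kb",
    embedding_function=OpenAIEmbeddingFunction(model_name="text-embedding-3-small"),
)

# ...so later retrievals reuse it without it being specified again.
kb = client.get_collection(name="kb")
kb.add(ids=["1"], documents=["embedded with the persisted function"])
```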

When using embedding functions that call external APIs, be sure to set the environment variable for the API key in the correct format; see Embedding Function Environment Variables below.

Usage with Claude Desktop

  1. To add an ephemeral client, add the following to your claude_desktop_config.json file:

```json
"chroma": {
    "command": "uvx",
    "args": [
        "chroma-mcp"
    ]
}
```
  2. To add a persistent client, add the following to your claude_desktop_config.json file:

```json
"chroma": {
    "command": "uvx",
    "args": [
        "chroma-mcp",
        "--client-type",
        "persistent",
        "--data-dir",
        "/full/path/to/your/data/directory"
    ]
}
```

This will create a persistent client that uses the specified data directory.

  3. To connect to Chroma Cloud, add the following to your claude_desktop_config.json file:

```json
"chroma": {
    "command": "uvx",
    "args": [
        "chroma-mcp",
        "--client-type",
        "cloud",
        "--tenant",
        "your-tenant-id",
        "--database",
        "your-database-name",
        "--api-key",
        "your-api-key"
    ]
}
```

This will create a cloud client that automatically connects to api.trychroma.com using SSL.

Note: Adding API keys in arguments is fine on local devices, but for safety, you can also specify a custom path for your environment configuration file using the --dotenv-path argument within the args list, for example: "args": ["chroma-mcp", "--dotenv-path", "/custom/path/.env"].

  4. To connect to a [self-hosted Chroma instance on your own cloud provider](https://docs.trychroma.com/production/deployment), add the following to your claude_desktop_config.json file:

```json
"chroma": {
    "command": "uvx",
    "args": [
      "chroma-mcp",
      "--client-type",
      "http",
      "--host",
      "your-host",
      "--port",
      "your-port",
      "--custom-auth-credentials",
      "your-custom-auth-credentials",
      "--ssl",
      "true"
    ]
}
```

This will create an HTTP client that connects to your self-hosted Chroma instance.

Demos

Find reference usages, such as shared knowledge bases and adding memory to context windows, in the Chroma MCP Docs.

Using Environment Variables

You can also use environment variables to configure the client. The server will automatically load variables from a .env file located at the path specified by --dotenv-path (defaults to .chroma_env in the working directory) or from system environment variables. Command-line arguments take precedence over environment variables.

```bash
# Common variables
export CHROMA_CLIENT_TYPE="http"  # or "cloud", "persistent", "ephemeral"

# For persistent client
export CHROMA_DATA_DIR="/full/path/to/your/data/directory"

# For cloud client (Chroma Cloud)
export CHROMA_TENANT="your-tenant-id"
export CHROMA_DATABASE="your-database-name"
export CHROMA_API_KEY="your-api-key"

# For HTTP client (self-hosted)
export CHROMA_HOST="your-host"
export CHROMA_PORT="your-port"
export CHROMA_CUSTOM_AUTH_CREDENTIALS="your-custom-auth-credentials"
export CHROMA_SSL="true"

# Optional: Specify path to .env file (defaults to .chroma_env)
export CHROMA_DOTENV_PATH="/path/to/your/.env"
```
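
The precedence rule can be pictured as a small resolver. A hypothetical sketch (not the server's actual code) using python-dotenv:

```python
# Hypothetical sketch of the precedence described above:
# CLI argument > environment variable (optionally loaded from a .env file) > default.
import os
from dotenv import load_dotenv  # python-dotenv

# Load the .env file first so its values land in os.environ.
load_dotenv(os.environ.get("CHROMA_DOTENV_PATH", ".chroma_env"))

def resolve(cli_value, env_name, default=None):
    """Return the CLI value if given, else the environment value, else the default."""
    return cli_value if cli_value is not None else os.environ.get(env_name, default)

# e.g. no --client-type flag was passed, so the environment (or default) decides:
client_type = resolve(None, "CHROMA_CLIENT_TYPE", "ephemeral")
print(client_type)
```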

Embedding Function Environment Variables

When using external embedding functions that require an API key, follow the naming convention CHROMA_<PROVIDER>_API_KEY="<key>". For example, to set a Cohere API key, set the environment variable CHROMA_COHERE_API_KEY="<your-cohere-api-key>". We recommend adding this to a .env file and using the CHROMA_DOTENV_PATH environment variable or --dotenv-path flag to point to that location for safekeeping.

Repository Owner

chroma-core (Organization)

Repository Details

Language: Python
Default Branch: main
Size: 276 KB
Contributors: 6
License: Apache License 2.0
MCP Verified: Nov 12, 2025

Programming Languages

Python: 99.23%
Dockerfile: 0.77%

Related MCPs

Discover similar Model Context Protocol servers

  • Weaviate MCP Server

    A server implementation for the Model Context Protocol (MCP) built on Weaviate.

    Weaviate MCP Server provides a backend implementation of the Model Context Protocol, enabling interaction with Weaviate for managing, inserting, and querying context objects. The server facilitates object insertion and hybrid search retrieval, supporting context-driven workflows required for LLM orchestration and memory management. It includes tools for building and running a client application, showcasing integration with Weaviate's vector database.

    • 157
    • MCP
    • weaviate/mcp-server-weaviate
  • mcp-server-qdrant

    Official Model Context Protocol server for seamless integration with Qdrant vector search engine.

    mcp-server-qdrant provides an official implementation of the Model Context Protocol for interfacing with the Qdrant vector search engine. It enables storing and retrieving contextual information, acting as a semantic memory layer for LLM-driven applications. Designed for easy integration, it supports environment-based configuration and extensibility via FastMCP. The server standardizes tool interfaces for managing and querying contextual data using Qdrant.

    • 1,054
    • MCP
    • qdrant/mcp-server-qdrant
  • VikingDB MCP Server

    MCP server for managing and searching VikingDB vector databases.

    VikingDB MCP Server is an implementation of the Model Context Protocol (MCP) that acts as a bridge between VikingDB, a high-performance vector database by ByteDance, and AI model context management frameworks. It allows users to store, upsert, and search vectorized information efficiently using standardized MCP commands. The server supports various operations on VikingDB collections and indexes, making it suitable for integrating advanced vector search in AI workflows.

    • 3
    • MCP
    • KashiwaByte/vikingdb-mcp-server
  • MCP Server for Milvus

    Bridge Milvus vector database with AI apps using Model Context Protocol (MCP).

    MCP Server for Milvus enables seamless integration between the Milvus vector database and large language model (LLM) applications via the Model Context Protocol. It exposes Milvus functionality to external LLM-powered tools through both stdio and Server-Sent Events communication modes. The solution is compatible with MCP-enabled clients such as Claude Desktop and Cursor, supporting easy access to relevant vector data for enhanced AI workflows. Configuration is flexible through environment variables or command-line arguments.

    • 196
    • MCP
    • zilliztech/mcp-server-milvus
  • mcp-pinecone

    A Pinecone-backed Model Context Protocol server for semantic search and document management.

    mcp-pinecone implements a Model Context Protocol (MCP) server that integrates with Pinecone indexes for use with clients such as Claude Desktop. It provides powerful tools for semantic search, document reading, listing, and processing within a Pinecone vector database. The server supports operations like embedding, chunking, and upserting records, enabling contextual management of large document sets. Designed for ease of installation and interoperability via the MCP standard.

    • 150
    • MCP
    • sirmews/mcp-pinecone
  • MCP Local RAG

    Privacy-first local semantic document search server for MCP clients.

    MCP Local RAG is a privacy-preserving, local document search server designed for use with Model Context Protocol (MCP) clients such as Cursor, Codex, and Claude Code. It enables users to ingest and semantically search local documents without using external APIs or cloud services. All processing, including embedding generation and vector storage, is performed on the user's machine. The tool supports document ingestion, semantic search, file management, file deletion, and system status reporting through MCP.

    • 10
    • MCP
    • shinpr/mcp-local-rag