mcp-pinecone

A Pinecone-backed Model Context Protocol server for semantic search and document management.

150 Stars · 35 Forks · 6 Issues

mcp-pinecone implements a Model Context Protocol (MCP) server that integrates with Pinecone indexes for use with clients such as Claude Desktop. It provides powerful tools for semantic search, document reading, listing, and processing within a Pinecone vector database. The server supports operations like embedding, chunking, and upserting records, enabling contextual management of large document sets. Designed for ease of installation and interoperability via the MCP standard.

Key Features

Read and write operations on Pinecone indexes
Semantic search capabilities over stored records
Document reading and listing tools
Statistics reporting for Pinecone indexes
Chunking and embedding generation for documents
Standardized MCP request and tool handlers
Integration with Claude Desktop as client
Automated installation via Smithery and PyPI
Processing and upserting of document chunks
Support for Pinecone's inference API

Use Cases

Enabling semantic document search within Pinecone-powered apps
Automating document ingestion and indexing with chunking and embedding
Providing context management for AI assistants such as Claude Desktop
Listing and querying documents in a vector database environment
Retrieving and summarizing Pinecone index statistics
Supporting conversational AI systems with context-driven search
Embedding generation and record upsertion for large document sets
Integrating Pinecone vector storage with standardized AI protocols
Developing tools for AI agents to read, write, and process documents
Facilitating programmable AI workflows through MCP endpoints

README

Pinecone Model Context Protocol Server for Claude Desktop.

Read and write to a Pinecone index.

Components

mermaid
flowchart TB
    subgraph Client["MCP Client (e.g., Claude Desktop)"]
        UI[User Interface]
    end

    subgraph MCPServer["MCP Server (pinecone-mcp)"]
        Server[Server Class]
        
        subgraph Handlers["Request Handlers"]
            ListRes[list_resources]
            ReadRes[read_resource]
            ListTools[list_tools]
            CallTool[call_tool]
            GetPrompt[get_prompt]
            ListPrompts[list_prompts]
        end
        
        subgraph Tools["Implemented Tools"]
            SemSearch[semantic-search]
            ReadDoc[read-document]
            ListDocs[list-documents]
            PineconeStats[pinecone-stats]
            ProcessDoc[process-document]
        end
    end

    subgraph PineconeService["Pinecone Service"]
        PC[Pinecone Client]
        subgraph PineconeFunctions["Pinecone Operations"]
            Search[search_records]
            Upsert[upsert_records]
            Fetch[fetch_records]
            List[list_records]
            Embed[generate_embeddings]
        end
        Index[(Pinecone Index)]
    end

    %% Connections
    UI --> Server
    Server --> Handlers
    
    ListTools --> Tools
    CallTool --> Tools
    
    Tools --> PC
    PC --> PineconeFunctions
    PineconeFunctions --> Index
    
    %% Data flow for semantic search
    SemSearch --> Search
    Search --> Embed
    Embed --> Index
    
    %% Data flow for document operations
    ProcessDoc --> Upsert
    ReadDoc --> Fetch
    ListRes --> List

    classDef primary fill:#2563eb,stroke:#1d4ed8,color:white
    classDef secondary fill:#4b5563,stroke:#374151,color:white
    classDef storage fill:#059669,stroke:#047857,color:white
    
    class Server,PC primary
    class Tools,Handlers secondary
    class Index storage
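
To make the flow above concrete, here is a minimal sketch of the handler-to-Pinecone dispatch using the low-level MCP Python SDK and the Pinecone client. The tool names match this server, but the handler bodies and environment-variable names are illustrative assumptions, not the repository's actual code.

python
# Minimal sketch of the handler -> tool -> Pinecone dispatch shown above.
# Tool names match this server; handler internals are illustrative only.
import os

import mcp.types as types
from mcp.server import Server
from pinecone import Pinecone

server = Server("pinecone-mcp")
pc = Pinecone(api_key=os.environ["PINECONE_API_KEY"])      # assumed env var
index = pc.Index(os.environ["PINECONE_INDEX_NAME"])        # assumed env var


@server.list_tools()
async def list_tools() -> list[types.Tool]:
    # list_tools advertises the tools the diagram groups under "Implemented Tools".
    return [
        types.Tool(
            name="pinecone-stats",
            description="Get stats about the Pinecone index.",
            inputSchema={"type": "object", "properties": {}},
        ),
        # semantic-search, read-document, list-documents, process-document ...
    ]


@server.call_tool()
async def call_tool(name: str, arguments: dict) -> list[types.TextContent]:
    # call_tool routes each invocation to the matching Pinecone operation.
    if name == "pinecone-stats":
        stats = index.describe_index_stats()
        return [types.TextContent(type="text", text=str(stats))]
    raise ValueError(f"Unknown tool: {name}")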

Resources

The server implements the ability to read and write to a Pinecone index.

Tools

  • semantic-search: Search for records in the Pinecone index.
  • read-document: Read a document from the Pinecone index.
  • list-documents: List all documents in the Pinecone index.
  • pinecone-stats: Get stats about the Pinecone index, including the number of records, dimensions, and namespaces.
  • process-document: Process a document into chunks and upsert them into the Pinecone index. This performs the overall steps of chunking, embedding, and upserting.
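
Outside of Claude Desktop, these tools can be invoked programmatically by any MCP client. The sketch below uses the MCP Python SDK to call semantic-search over stdio; the {"query": ...} argument shape is an assumption about this server's input schema, and the index name and API key are placeholders.

python
# Hypothetical client-side invocation of the semantic-search tool over stdio,
# using the MCP Python SDK. The argument shape is an assumption.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

params = StdioServerParameters(
    command="uvx",
    args=["mcp-pinecone", "--index-name", "my-index", "--api-key", "..."],
)


async def main() -> None:
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([t.name for t in tools.tools])  # discover the tools listed above
            result = await session.call_tool("semantic-search", {"query": "vector databases"})
            print(result.content)


asyncio.run(main())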

Note: embeddings are generated via Pinecone's inference API, and chunking is done with a token-based chunker. The chunker was written by adapting a lot of code from LangChain and debugging it with Claude.
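
A rough sketch of that pipeline, assuming tiktoken for the token-based chunking and Pinecone's multilingual-e5-large inference model (the chunk size, overlap, model choice, and chunk-ID scheme here are illustrative, not necessarily the server's defaults):

python
# Illustrative chunk -> embed -> upsert pipeline, roughly what process-document
# performs. Chunk size, overlap, model, and ID scheme are assumptions.
import os

import tiktoken
from pinecone import Pinecone

pc = Pinecone(api_key=os.environ["PINECONE_API_KEY"])
index = pc.Index(os.environ["PINECONE_INDEX_NAME"])


def chunk_text(text: str, max_tokens: int = 512, overlap: int = 50) -> list[str]:
    # Token-based chunking: split on token boundaries with a small overlap.
    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode(text)
    step = max_tokens - overlap
    return [enc.decode(tokens[i : i + max_tokens]) for i in range(0, len(tokens), step)]


def process_document(doc_id: str, text: str) -> None:
    chunks = chunk_text(text)
    # Embeddings come from Pinecone's inference API.
    embeddings = pc.inference.embed(
        model="multilingual-e5-large",
        inputs=chunks,
        parameters={"input_type": "passage"},
    )
    vectors = [
        {"id": f"{doc_id}#chunk{i}", "values": e["values"], "metadata": {"text": c}}
        for i, (c, e) in enumerate(zip(chunks, embeddings))
    ]
    index.upsert(vectors=vectors)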

Quickstart

Installing via Smithery

To install Pinecone MCP Server for Claude Desktop automatically via Smithery:

bash
npx -y @smithery/cli install mcp-pinecone --client claude

Install the server

We recommend using uv to install the server locally for Claude.

uv tool install mcp-pinecone

OR

uv pip install mcp-pinecone

Add your config as described below.

Claude Desktop

On macOS: ~/Library/Application\ Support/Claude/claude_desktop_config.json
On Windows: %APPDATA%/Claude/claude_desktop_config.json

Note: You might need to use the direct path to uv. Use which uv to find the path.

Development/Unpublished Servers Configuration

json
"mcpServers": {
  "mcp-pinecone": {
    "command": "uv",
    "args": [
      "--directory",
      "{project_dir}",
      "run",
      "mcp-pinecone"
    ]
  }
}

Published Servers Configuration

json
"mcpServers": {
  "mcp-pinecone": {
    "command": "uvx",
    "args": [
      "--index-name",
      "{your-index-name}",
      "--api-key",
      "{your-secret-api-key}",
      "mcp-pinecone"
    ]
  }
}

Sign up to Pinecone

You can sign up for a Pinecone account at https://www.pinecone.io/.

Get an API key

Create a new index in Pinecone and get an API key from the Pinecone dashboard. Use them to replace {your-index-name} and {your-secret-api-key} in the config above.
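
If you prefer to bootstrap the index programmatically, here is a minimal sketch with the Pinecone Python SDK; the 1024 dimension assumes an embedding model like multilingual-e5-large served via Pinecone inference, and the cloud/region values are placeholders for your own setup.

python
# Hypothetical index bootstrap with the Pinecone Python SDK.
from pinecone import Pinecone, ServerlessSpec

pc = Pinecone(api_key="{your-secret-api-key}")

existing = [idx.name for idx in pc.list_indexes()]
if "{your-index-name}" not in existing:
    pc.create_index(
        name="{your-index-name}",
        dimension=1024,  # assumes a 1024-dim model like multilingual-e5-large
        metric="cosine",
        spec=ServerlessSpec(cloud="aws", region="us-east-1"),  # placeholders
    )

# Sanity check: confirm the index is reachable before wiring it into Claude.
print(pc.Index("{your-index-name}").describe_index_stats())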

Development

Building and Publishing

To prepare the package for distribution:

  1. Sync dependencies and update lockfile:
bash
uv sync
  2. Build package distributions:
bash
uv build

This will create source and wheel distributions in the dist/ directory.

  3. Publish to PyPI:
bash
uv publish

Note: You'll need to set PyPI credentials via environment variables or command flags:

  • Token: --token or UV_PUBLISH_TOKEN
  • Or username/password: --username/UV_PUBLISH_USERNAME and --password/UV_PUBLISH_PASSWORD

Debugging

Since MCP servers run over stdio, debugging can be challenging. For the best debugging experience, we strongly recommend using the MCP Inspector.

You can launch the MCP Inspector via npm with this command:

bash
npx @modelcontextprotocol/inspector uv --directory {project_dir} run mcp-pinecone

Upon launching, the Inspector will display a URL that you can access in your browser to begin debugging.

License

This project is licensed under the MIT License. See the LICENSE file for details.

Source Code

The source code is available on GitHub at sirmews/mcp-pinecone.

Contributing

Send your ideas and feedback to me on Bluesky or by opening an issue.

Repository Owner

sirmews (User)

Repository Details

Language Python
Default Branch main
Size 115 KB
Contributors 2
License MIT License
MCP Verified Nov 12, 2025

Programming Languages

Python 93.01%
Makefile 4.38%
Dockerfile 2.61%

Topics

claude mcp mcp-server model-context-protocol pinecone rag

Related MCPs

Discover similar Model Context Protocol servers

  • Pinecone Assistant MCP Server

    An MCP server for retrieving information from Pinecone Assistant.

    Pinecone Assistant MCP Server is an implementation of the Model Context Protocol (MCP) for seamless integration with Pinecone Assistant. It enables retrieval of information and supports configurable multiple result fetching. The server can be run via Docker or built from source with Rust and integrates with tools like Claude Desktop.

    • 37 · MCP · pinecone-io/assistant-mcp

  • MCP Local RAG

    Privacy-first local semantic document search server for MCP clients.

    MCP Local RAG is a privacy-preserving, local document search server designed for use with Model Context Protocol (MCP) clients such as Cursor, Codex, and Claude Code. It enables users to ingest and semantically search local documents without using external APIs or cloud services. All processing, including embedding generation and vector storage, is performed on the user's machine. The tool supports document ingestion, semantic search, file management, file deletion, and system status reporting through MCP.

    • 10 · MCP · shinpr/mcp-local-rag

  • MCP Server for Milvus

    Bridge the Milvus vector database with AI apps using the Model Context Protocol (MCP).

    MCP Server for Milvus enables seamless integration between the Milvus vector database and large language model (LLM) applications via the Model Context Protocol. It exposes Milvus functionality to external LLM-powered tools through both stdio and Server-Sent Events communication modes. The solution is compatible with MCP-enabled clients such as Claude Desktop and Cursor, supporting easy access to relevant vector data for enhanced AI workflows. Configuration is flexible through environment variables or command-line arguments.

    • 196 · MCP · zilliztech/mcp-server-milvus

  • LlamaCloud MCP Server

    Connect multiple LlamaCloud indexes as tools for your MCP client.

    LlamaCloud MCP Server is a TypeScript-based implementation of a Model Context Protocol server that allows users to connect multiple managed indexes from LlamaCloud as separate tools in MCP-compatible clients. Each tool is defined via command-line parameters, enabling flexible and dynamic access to different document indexes. The server automatically generates tool interfaces, each capable of querying its respective LlamaCloud index, with customizable parameters such as index name, description, and result limits. Designed for seamless integration, it works with clients like Claude Desktop, Windsurf, and Cursor.

    • 82 · MCP · run-llama/mcp-server-llamacloud

  • Weaviate MCP Server

    A server implementation for the Model Context Protocol (MCP) built on Weaviate.

    Weaviate MCP Server provides a backend implementation of the Model Context Protocol, enabling interaction with Weaviate for managing, inserting, and querying context objects. The server facilitates object insertion and hybrid search retrieval, supporting context-driven workflows required for LLM orchestration and memory management. It includes tools for building and running a client application, showcasing integration with Weaviate's vector database.

    • 157 · MCP · weaviate/mcp-server-weaviate

  • tavily-search MCP server

    A search server that integrates the Tavily API with Model Context Protocol tools.

    tavily-search MCP server provides an MCP-compliant server to perform search queries using the Tavily API. It returns search results in text format, including AI responses, URLs, and result titles. The server is designed for easy integration with clients like Claude Desktop or Cursor and supports both local and Docker-based deployment. It facilitates AI workflows by offering search functionality as part of a standardized protocol interface.

    • 44 · MCP · Tomatio13/mcp-server-tavily