pluggedin-mcp-proxy

Unified proxy server for Model Context Protocol data exchanges and AI integrations

⭐ 87 stars · 15 forks · 87 watchers · 0 open issues

Aggregates multiple Model Context Protocol (MCP) servers into a single, unified proxy interface, supporting real-time discovery, management, and orchestration of AI model resources, tools, and prompts. Enables seamless interaction between MCP clients such as Claude, Cline, and Cursor, while integrating advanced document search, AI document exchange, and workspace management. Provides flexible transport modes (STDIO and Streamable HTTP), robust authentication, and comprehensive security measures for safe and scalable AI data exchange.

Key Features

Aggregates and routes requests across multiple MCP servers
Supports both STDIO and Streamable HTTP transport modes
Unified search and retrieval of tools, resources, and prompts
Real-time notification system with optional email delivery
AI document creation, search, and versioning with model attribution
Multi-workspace configuration and quick switching
Flexible authentication including OAuth and API key support
Comprehensive security features (SSRF/blocklists/sanitization/rate limiting)
API-driven configuration and proxy operation
Dockerized deployment with optimized container builds

Use Cases

Unifying access to multiple AI model servers for developers or teams
Centralized orchestration and management of tools and resources in AI workflows
Providing a single discovery interface for external clients like Claude Desktop, Cline, or Cursor
Enabling advanced, RAG-enhanced semantic search across AI-generated and uploaded documents
Real-time notification relay and logging of AI activities
Seamless integration of custom tools and datasets with AI assistants
Streamlined handling of authentication and security for AI proxy environments
Multi-workspace support for different organizational or project requirements
Automated AI document creation and management with attribution tracking
Running secure, scalable AI infrastructure in Dockerized, web-based, or local settings

README

plugged.in MCP Proxy Server


📋 Overview

The plugged.in MCP Proxy Server is a powerful middleware that aggregates multiple Model Context Protocol (MCP) servers into a single unified interface. It fetches tool, prompt, and resource configurations from the plugged.in App and intelligently routes requests to the appropriate underlying MCP servers.

This proxy enables seamless integration with any MCP client (Claude, Cline, Cursor, etc.) while providing advanced management capabilities through the plugged.in ecosystem.

โญ If you find this project useful, please consider giving it a star on GitHub! It helps us reach more developers and motivates us to keep improving.

✨ Key Features

🚀 Core Capabilities

  • Built-in AI Playground: Test your MCPs instantly with Claude, Gemini, OpenAI, and xAI without any client setup
  • Universal MCP Compatibility: Works with any MCP client including Claude Desktop, Cline, and Cursor
  • Multi-Server Support: Connect to STDIO, SSE, and Streamable HTTP MCP servers
  • Dual Transport Modes: Run proxy as STDIO (default) or Streamable HTTP server
  • Unified Document Search: Search across all connected servers with built-in RAG capabilities
  • AI Document Exchange (RAG v2): MCP servers can create and manage documents in your library with full attribution
  • Notifications from Any Model: Receive real-time notifications with optional email delivery
  • Multi-Workspace Layer: Switch between different sets of MCP configurations with one click
  • API-Driven Proxy: Fetches capabilities from plugged.in App APIs rather than direct discovery
  • Full MCP Support: Handles tools, resources, resource templates, and prompts
  • Custom Instructions: Supports server-specific instructions formatted as MCP prompts

🎯 New in v1.5.0 (RAG v2 - AI Document Exchange)

  • AI Document Creation: MCP servers can now create documents directly in your library
    • Full model attribution tracking (which AI created/updated the document)
    • Version history with change tracking
    • Content deduplication via SHA-256 hashing
    • Support for multiple formats: MD, TXT, JSON, HTML, PDF, and more
  • Advanced Document Search: Enhanced RAG queries with AI filtering
    • Filter by AI model, provider, date range, tags, and source type
    • Semantic search with relevance scoring
    • Automatic snippet generation with keyword highlighting
    • Support for filtering: ai_generated, upload, or api sources
  • Document Management via MCP:
    • Set document visibility: private, workspace, or public
    • Parent-child relationships for document versions
    • Profile-based organization alongside project-based scoping
    • Real-time progress tracking for document processing
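The SHA-256 deduplication mentioned above can be illustrated with a short sketch. This is illustrative TypeScript, not the proxy's actual code; the `upsertDocument` helper and in-memory store are assumptions made for the example:

```typescript
import { createHash } from "node:crypto";

// Hash normalized content so semantically identical uploads dedupe to one document.
function contentHash(content: string): string {
  const normalized = content.replace(/\r\n/g, "\n"); // normalize line endings
  return createHash("sha256").update(normalized, "utf8").digest("hex");
}

const store = new Map<string, string>(); // hash -> document id (hypothetical store)

function upsertDocument(id: string, content: string): { id: string; duplicate: boolean } {
  const hash = contentHash(content);
  const existing = store.get(hash);
  if (existing) return { id: existing, duplicate: true }; // reuse existing document
  store.set(hash, id);
  return { id, duplicate: false };
}
```

A second upload with the same content (even with different line endings) would resolve to the first document's ID rather than creating a new entry.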

🎯 Features from v1.4.0 (Registry v2 Support)

  • OAuth Token Management: Seamless OAuth authentication handling for Streamable HTTP MCP servers
    • Automatic token retrieval from plugged.in App
    • Secure token storage and refresh mechanisms
    • No client-side authentication needed
  • Enhanced Notification System: Bidirectional notification support
    • Send notifications to plugged.in App
    • Receive notifications from MCP servers
    • Mark notifications as read/unread
    • Delete notifications programmatically
  • Trending Analytics: Real-time activity tracking
    • Every tool call is logged and tracked
    • Contributes to trending server calculations
    • Usage metrics and popularity insights
  • Registry Integration: Full support for Registry v2 features
    • Automatic server discovery from registry
    • Installation tracking and metrics
    • Community server support

📦 Features from v1.1.0

  • Streamable HTTP Support: Full support for downstream MCP servers using Streamable HTTP transport
  • HTTP Server Mode: Run the proxy as an HTTP server with configurable ports
  • Flexible Authentication: Optional Bearer token authentication for HTTP endpoints
  • Session Management: Choose between stateful (session-based) or stateless operation modes

🎯 Core Features from v1.0.0

  • Real-Time Notifications: Track all MCP activities with comprehensive notification support
  • RAG Integration: Support for document-enhanced queries through the plugged.in App
  • Inspector Scripts: Automated testing tools for debugging and development
  • Health Monitoring: Built-in ping endpoint for connection monitoring

🔧 Tool Categories

The proxy provides two distinct categories of tools:

🔧 Static Built-in Tools (Always Available)

These tools are built into the proxy and work without any server configuration:

  • pluggedin_discover_tools - Smart discovery with caching for instant results
  • pluggedin_rag_query - RAG v2 search across your documents with AI filtering capabilities
  • pluggedin_send_notification - Send notifications with optional email delivery
  • pluggedin_create_document - (Coming Soon) Create AI-generated documents in your library

⚡ Dynamic MCP Tools (From Connected Servers)

These tools come from your configured MCP servers and can be turned on/off:

  • Database tools (PostgreSQL, SQLite, etc.)
  • File system tools
  • API integration tools
  • Custom tools from any MCP server

The discovery tool intelligently shows both categories, giving AI models immediate access to all available capabilities.

🚀 Discovery Tool Usage

bash
# Quick discovery - returns cached data instantly
pluggedin_discover_tools()

# Force refresh - shows current tools + runs background discovery  
pluggedin_discover_tools({"force_refresh": true})

# Discover specific server
pluggedin_discover_tools({"server_uuid": "uuid-here"})

Example Response:

## 🔧 Static Built-in Tools (Always Available):
1. **pluggedin_discover_tools** - Smart discovery with caching
2. **pluggedin_rag_query** - RAG v2 search across documents with AI filtering  
3. **pluggedin_send_notification** - Send notifications
4. **pluggedin_create_document** - (Coming Soon) Create AI-generated documents

## ⚡ Dynamic MCP Tools (8) - From Connected Servers:
1. **query** - Run read-only SQL queries
2. **generate_random_integer** - Generate secure random integers
...

📚 RAG v2 Usage Examples

The enhanced RAG v2 system allows MCP servers to create and search documents with full AI attribution:

bash
# Search for documents created by specific AI models
pluggedin_rag_query({
  "query": "system architecture",
  "filters": {
    "modelName": "Claude 3 Opus",
    "source": "ai_generated",
    "tags": ["technical"]
  }
})

# Search across all document sources
pluggedin_rag_query({
  "query": "deployment guide",
  "filters": {
    "dateFrom": "2024-01-01",
    "visibility": "workspace"
  }
})

# Future: Create AI-generated documents (Coming Soon)
pluggedin_create_document({
  "title": "Analysis Report",
  "content": "# Market Analysis\n\nDetailed findings...",
  "format": "md",
  "tags": ["analysis", "market"],
  "metadata": {
    "model": {
      "name": "Claude 3 Opus",
      "provider": "Anthropic"
    }
  }
})

🚀 Quick Start

Prerequisites

  • Node.js 18+ (recommended v20+)
  • An API key from the plugged.in App (get one at plugged.in/api-keys)

Installation

bash
# Install and run the latest version with npx
npx -y @pluggedin/pluggedin-mcp-proxy@latest --pluggedin-api-key YOUR_API_KEY

🔄 Upgrading to v1.0.0

For existing installations, see our Migration Guide for detailed upgrade instructions.

bash
# Quick upgrade
npx -y @pluggedin/pluggedin-mcp-proxy@1.0.0 --pluggedin-api-key YOUR_API_KEY

Configuration for MCP Clients

Claude Desktop

Add the following to your Claude Desktop configuration:

json
{
  "mcpServers": {
    "pluggedin": {
      "command": "npx",
      "args": ["-y", "@pluggedin/pluggedin-mcp-proxy@latest"],
      "env": {
        "PLUGGEDIN_API_KEY": "YOUR_API_KEY"
      }
    }
  }
}

Cline

Add the following to your Cline configuration:

json
{
  "mcpServers": {
    "pluggedin": {
      "command": "npx",
      "args": ["-y", "@pluggedin/pluggedin-mcp-proxy@latest"],
      "env": {
        "PLUGGEDIN_API_KEY": "YOUR_API_KEY"
      }
    }
  }
}

Cursor

For Cursor, you can use command-line arguments instead of environment variables:

bash
npx -y @pluggedin/pluggedin-mcp-proxy@latest --pluggedin-api-key YOUR_API_KEY

โš™๏ธ Configuration Options

Environment Variables

| Variable | Description | Required | Default |
| --- | --- | --- | --- |
| `PLUGGEDIN_API_KEY` | API key from the plugged.in App | Yes | - |
| `PLUGGEDIN_API_BASE_URL` | Base URL for the plugged.in App | No | `https://plugged.in` |

Command Line Arguments

Command line arguments take precedence over environment variables:

bash
npx -y @pluggedin/pluggedin-mcp-proxy@latest --pluggedin-api-key YOUR_API_KEY --pluggedin-api-base-url https://your-custom-url.com

Transport Options

| Option | Description | Default |
| --- | --- | --- |
| `--transport <type>` | Transport type: `stdio` or `streamable-http` | `stdio` |
| `--port <number>` | Port for the Streamable HTTP server | `12006` |
| `--stateless` | Enable stateless mode for Streamable HTTP | `false` |
| `--require-api-auth` | Require an API key for Streamable HTTP requests | `false` |

For a complete list of options:

bash
npx -y @pluggedin/pluggedin-mcp-proxy@latest --help

๐ŸŒ Streamable HTTP Mode

The proxy can run as an HTTP server instead of STDIO, enabling web-based access and remote connections.

Basic Usage

bash
# Run as HTTP server on default port (12006)
npx -y @pluggedin/pluggedin-mcp-proxy@latest --transport streamable-http --pluggedin-api-key YOUR_API_KEY

# Custom port
npx -y @pluggedin/pluggedin-mcp-proxy@latest --transport streamable-http --port 8080 --pluggedin-api-key YOUR_API_KEY

# With authentication required
npx -y @pluggedin/pluggedin-mcp-proxy@latest --transport streamable-http --require-api-auth --pluggedin-api-key YOUR_API_KEY

# Stateless mode (new session per request)
npx -y @pluggedin/pluggedin-mcp-proxy@latest --transport streamable-http --stateless --pluggedin-api-key YOUR_API_KEY

HTTP Endpoints

  • POST /mcp - Send MCP messages
  • GET /mcp - Server-sent events stream (optional)
  • DELETE /mcp - Terminate session
  • GET /health - Health check endpoint

Session Management

In stateful mode (default), use the mcp-session-id header to maintain sessions:

bash
# First request creates a session
curl -X POST http://localhost:12006/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"jsonrpc":"2.0","method":"tools/list","id":1}'

# Subsequent requests use the same session
curl -X POST http://localhost:12006/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -H "mcp-session-id: YOUR_SESSION_ID" \
  -d '{"jsonrpc":"2.0","method":"tools/call","params":{"name":"tool_name"},"id":2}'

Authentication

When using --require-api-auth, include your API key as a Bearer token:

bash
curl -X POST http://localhost:12006/mcp \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"jsonrpc":"2.0","method":"ping","id":1}'

๐Ÿณ Docker Usage

You can also build and run the proxy server using Docker.

Building the Image

Ensure you have Docker installed and running. Navigate to the pluggedin-mcp directory and run:

bash
docker build -t pluggedin-mcp-proxy:latest .

A .dockerignore file is included to optimize the build context.

Running the Container

STDIO Mode (Default)

Run the container in STDIO mode for MCP Inspector testing:

bash
docker run -it --rm \
  -e PLUGGEDIN_API_KEY="YOUR_API_KEY" \
  -e PLUGGEDIN_API_BASE_URL="YOUR_API_BASE_URL" \
  --name pluggedin-mcp-container \
  pluggedin-mcp-proxy:latest

Streamable HTTP Mode

Run the container as an HTTP server:

bash
docker run -d --rm \
  -e PLUGGEDIN_API_KEY="YOUR_API_KEY" \
  -e PLUGGEDIN_API_BASE_URL="YOUR_API_BASE_URL" \
  -p 12006:12006 \
  --name pluggedin-mcp-http \
  pluggedin-mcp-proxy:latest \
  --transport streamable-http --port 12006

Replace YOUR_API_KEY and YOUR_API_BASE_URL (if not using the default https://plugged.in).

Testing with MCP Inspector

While the container is running, you can connect to it using the MCP Inspector:

bash
npx @modelcontextprotocol/inspector docker://pluggedin-mcp-container

This will connect to the standard input/output of the running container.

Stopping the Container

Press Ctrl+C in the terminal where docker run is executing. The --rm flag ensures the container is removed automatically upon stopping.

๐Ÿ—๏ธ System Architecture

The plugged.in MCP Proxy Server acts as a bridge between MCP clients and multiple underlying MCP servers:

mermaid
sequenceDiagram
    participant MCPClient as MCP Client (e.g. Claude Desktop)
    participant PluggedinMCP as plugged.in MCP Proxy
    participant PluggedinApp as plugged.in App
    participant MCPServers as Underlying MCP Servers

    MCPClient ->> PluggedinMCP: Request list tools/resources/prompts
    PluggedinMCP ->> PluggedinApp: Get capabilities via API
    PluggedinApp ->> PluggedinMCP: Return capabilities (prefixed)

    MCPClient ->> PluggedinMCP: Call tool/read resource/get prompt
    alt Standard capability
        PluggedinMCP ->> PluggedinApp: Resolve capability to server
        PluggedinApp ->> PluggedinMCP: Return server details
        PluggedinMCP ->> MCPServers: Forward request to target server
        MCPServers ->> PluggedinMCP: Return response
    else Custom instruction
        PluggedinMCP ->> PluggedinApp: Get custom instruction
        PluggedinApp ->> PluggedinMCP: Return formatted messages
    end
    PluggedinMCP ->> MCPClient: Return response

    alt Discovery tool (Smart Caching)
        MCPClient ->> PluggedinMCP: Call pluggedin_discover_tools
        alt Cached data available
            PluggedinMCP ->> PluggedinApp: Check cached capabilities
            PluggedinApp ->> PluggedinMCP: Return cached tools/resources/prompts
            PluggedinMCP ->> MCPClient: Return instant results (static + dynamic)
        else Force refresh or no cache
            PluggedinMCP ->> PluggedinApp: Trigger background discovery
            PluggedinMCP ->> MCPClient: Return current tools + "discovery running"
            PluggedinApp ->> MCPServers: Connect and discover capabilities (background)
            MCPServers ->> PluggedinApp: Return fresh capabilities
        end
    end

🔄 Workflow

  1. Configuration: The proxy fetches server configurations from the plugged.in App
  2. Smart Discovery (pluggedin_discover_tools):
    • Cache Check: First checks for existing cached data (< 1 second)
    • Instant Response: Returns static tools + cached dynamic tools immediately
    • Background Refresh: For force_refresh=true, runs discovery in background while showing current tools
    • Fresh Discovery: Only runs full discovery if no cached data exists
  3. Capability Listing: The proxy fetches discovered capabilities from plugged.in App APIs
    • tools/list: Fetches from /api/tools (includes static + dynamic tools)
    • resources/list: Fetches from /api/resources
    • resource-templates/list: Fetches from /api/resource-templates
    • prompts/list: Fetches from /api/prompts and /api/custom-instructions, merges results
  4. Capability Resolution: The proxy resolves capabilities to target servers
    • tools/call: Parses prefix from tool name, looks up server in internal map
    • resources/read: Calls /api/resolve/resource?uri=... to get server details
    • prompts/get: Checks for custom instruction prefix or calls /api/resolve/prompt?name=...
  5. Request Routing: Requests are routed to the appropriate underlying MCP server
  6. Response Handling: Responses from the underlying servers are returned to the client
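The prefix-based tool resolution in step 4 can be sketched in a few lines. This is a minimal TypeScript illustration; the `__` separator, the `resolveTool` helper, and the in-memory server map are assumptions for the example, not the proxy's actual implementation:

```typescript
// Hypothetical sketch: map a prefixed tool name back to its origin server.
interface ResolvedTool {
  serverUuid: string;
  toolName: string;
}

// prefix -> server UUID, populated from the capability fetch (values are made up)
const serverMap = new Map<string, string>([["github", "uuid-1234"]]);

function resolveTool(prefixedName: string): ResolvedTool | null {
  const sep = prefixedName.indexOf("__"); // assumed separator for this sketch
  if (sep <= 0) return null; // no prefix: likely a static built-in tool
  const serverUuid = serverMap.get(prefixedName.slice(0, sep));
  if (!serverUuid) return null; // unknown prefix: cannot route
  return { serverUuid, toolName: prefixedName.slice(sep + 2) };
}
```

A `tools/call` for a resolved name would then be forwarded to the matching server with the prefix stripped, while unprefixed names fall through to the static built-in handlers.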

🔒 Security Features

The plugged.in MCP Proxy implements comprehensive security measures to protect your system and data:

Input Validation & Sanitization

  • Command Injection Prevention: All commands and arguments are validated against allowlists before execution
  • Environment Variable Security: Secure parsing of .env files with proper handling of quotes and multiline values
  • Token Validation: Strong regex patterns for API keys and authentication tokens (32-64 hex characters)

Network Security

  • SSRF Protection: URL validation blocks access to:
    • Localhost and loopback addresses (127.0.0.1, ::1)
    • Private IP ranges (10.x, 172.16-31.x, 192.168.x)
    • Link-local addresses (169.254.x)
    • Multicast and reserved ranges
    • Common internal service ports (SSH, databases, etc.)
  • Header Validation: Protection against header injection with:
    • Dangerous header blocking
    • RFC 7230 compliant header name validation
    • Control character detection
    • Header size limits (8KB max)
  • Rate Limiting:
    • Tool calls: 60 requests per minute
    • API calls: 100 requests per minute
  • Error Sanitization: Prevents information disclosure by sanitizing error messages
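The SSRF checks above can be sketched as a URL validator. This is a simplified TypeScript illustration of the blocked ranges listed; the shipped `security-utils.ts` module is more thorough (DNS resolution, IPv6 ranges, port checks):

```typescript
// Hypothetical sketch of SSRF protection: reject URLs targeting internal hosts.
function isPrivateIPv4(host: string): boolean {
  const m = host.match(/^(\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})$/);
  if (!m) return false;
  const a = Number(m[1]);
  const b = Number(m[2]);
  return (
    a === 10 ||                          // 10.0.0.0/8 private
    a === 127 ||                         // loopback
    (a === 172 && b >= 16 && b <= 31) || // 172.16.0.0/12 private
    (a === 192 && b === 168) ||          // 192.168.0.0/16 private
    (a === 169 && b === 254) ||          // link-local
    a >= 224                             // multicast + reserved
  );
}

function isUrlAllowed(raw: string): boolean {
  let url: URL;
  try {
    url = new URL(raw);
  } catch {
    return false; // unparseable URL
  }
  if (url.protocol !== "http:" && url.protocol !== "https:") return false;
  const host = url.hostname.toLowerCase();
  if (host === "localhost" || host === "[::1]" || host === "::1") return false;
  return !isPrivateIPv4(host);
}
```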

Process Security

  • Safe Command Execution: Uses execFile() instead of exec() to prevent shell injection
  • Command Allowlist: Only permits execution of:
    • node, npx - Node.js commands
    • python, python3 - Python commands
    • uv, uvx, uvenv - UV Python tools
  • Argument Sanitization: Removes shell metacharacters and control characters from all arguments
  • Environment Variable Validation: Only allows alphanumeric keys with underscores
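The allowlist-plus-`execFile()` pattern described above can be sketched as follows. This is an illustrative TypeScript snippet, not the proxy's code; the `safeSpawn` helper and the exact metacharacter set are assumptions:

```typescript
import { execFile } from "node:child_process";

// Hypothetical sketch mirroring the allowlist quoted above.
const ALLOWED_COMMANDS = new Set(["node", "npx", "python", "python3", "uv", "uvx", "uvenv"]);
const ENV_KEY = /^[A-Za-z_][A-Za-z0-9_]*$/; // alphanumeric + underscore keys only

function sanitizeArg(arg: string): string {
  // Strip shell metacharacters and control characters (illustrative set).
  return arg.replace(/[;&|`$<>(){}\n\r\0]/g, "");
}

function safeSpawn(command: string, args: string[], env: Record<string, string>): void {
  if (!ALLOWED_COMMANDS.has(command)) {
    throw new Error(`Command not allowed: ${command}`);
  }
  for (const key of Object.keys(env)) {
    if (!ENV_KEY.test(key)) throw new Error(`Invalid env key: ${key}`);
  }
  // execFile passes args directly to the binary: no shell, no injection surface.
  execFile(command, args.map(sanitizeArg), { env });
}
```

The key design choice is that `execFile()` never invokes a shell, so even an argument that slipped past sanitization cannot chain commands the way it could under `exec()`.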

Streamable HTTP Security

  • Lazy Authentication: Tool discovery doesn't require authentication, improving compatibility
  • Session Security: Cryptographically secure session ID generation
  • CORS Protection: Configurable CORS headers for web access
  • Request Size Limits: Prevents DoS through large payloads
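Secure session handling in stateful mode can be sketched like this. The ID format, the in-memory store, and the 30-minute TTL are assumptions for illustration; only the use of a CSPRNG-backed generator reflects the guarantee stated above:

```typescript
import { randomUUID } from "node:crypto";

// Hypothetical sketch of stateful session tracking with secure IDs.
interface Session {
  id: string;
  createdAt: number;
}

const sessions = new Map<string, Session>();
const SESSION_TTL_MS = 30 * 60 * 1000; // assumed idle timeout for this sketch

function createSession(): Session {
  const session = { id: randomUUID(), createdAt: Date.now() }; // CSPRNG-backed ID
  sessions.set(session.id, session);
  return session;
}

function getSession(id: string): Session | undefined {
  const s = sessions.get(id);
  if (s && Date.now() - s.createdAt > SESSION_TTL_MS) {
    sessions.delete(id); // expire stale sessions (DELETE /mcp also removes them)
    return undefined;
  }
  return s;
}
```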

Security Utilities

A dedicated security-utils.ts module provides:

  • Bearer token validation
  • URL validation with SSRF protection
  • Command argument sanitization
  • Environment variable validation
  • Rate limiting implementation
  • Error message sanitization
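As one example, the rate-limiting utility could look like the fixed-window sketch below. The limits match those quoted earlier (60 tool calls per minute), but the class shape and window strategy are assumptions; the bundled implementation may differ:

```typescript
// Hypothetical fixed-window rate limiter sketch.
class RateLimiter {
  private windowStart = 0;
  private count = 0;

  constructor(private limit: number, private windowMs: number) {}

  // Returns true if the request is within the current window's budget.
  allow(now: number = Date.now()): boolean {
    if (now - this.windowStart >= this.windowMs) {
      this.windowStart = now; // start a fresh window
      this.count = 0;
    }
    return ++this.count <= this.limit;
  }
}

// 60 tool calls per minute, per the limits listed above.
const toolCallLimiter = new RateLimiter(60, 60_000);
```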

For detailed security implementation, see SECURITY.md.

🧩 Integration with plugged.in App

The plugged.in MCP Proxy Server is designed to work seamlessly with the plugged.in App, which provides:

  • A web-based interface for managing MCP server configurations
  • Centralized capability discovery (Tools, Resources, Templates, Prompts)
  • RAG v2 Document Library: Upload documents and enable AI-generated content with full attribution
  • Custom instructions management
  • Multi-workspace support for different configuration sets
  • An interactive playground for testing MCP tools with any AI model
  • User authentication and API key management
  • AI Document Exchange: Create, search, and manage documents with model attribution tracking

๐Ÿค Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

๐Ÿ“ Recent Updates

Version 1.5.0 (January 2025) - RAG v2

🤖 AI Document Exchange

  • AI-Generated Documents: MCP servers can now create documents in your library with full AI attribution
  • Model Attribution Tracking: Complete history of which AI models created or updated each document
  • Advanced Document Search: Filter by AI model, provider, date, tags, and source type
  • Document Versioning: Track changes and maintain version history for AI-generated content
  • Multi-Source Support: Documents from uploads, AI generation, or API integrations

๐Ÿ” Enhanced RAG Capabilities

  • Semantic Search: Improved relevance scoring with PostgreSQL full-text search
  • Smart Filtering: Filter results by visibility, model attribution, and document source
  • Snippet Generation: Automatic snippet extraction with keyword highlighting
  • Performance Optimization: Faster queries with optimized indexing

Version 1.2.0 (January 2025)

🔒 Security Enhancements

  • URL Validation: Comprehensive SSRF protection blocking private IPs, localhost, and dangerous ports
  • Command Allowlisting: Only approved commands (node, npx, python, etc.) can be executed
  • Header Sanitization: Protection against header injection attacks
  • Lazy Authentication: Improved Smithery compatibility with auth-free tool discovery

🚀 Performance Improvements

  • Optimized Docker Builds: Multi-stage builds for minimal container footprint
  • Production Dependencies Only: Test files and dev dependencies excluded from Docker images
  • Resource Efficiency: Designed for deployment in resource-constrained environments

🔧 Technical Improvements

  • Enhanced error handling in Streamable HTTP transport
  • Better session cleanup and memory management
  • Improved TypeScript types and code organization

Version 1.1.0 (December 2024)

🚀 New Features

  • Streamable HTTP Support: Connect to downstream MCP servers using the modern Streamable HTTP transport
  • HTTP Server Mode: Run the proxy as an HTTP server for web-based access
  • Flexible Session Management: Choose between stateless or stateful modes
  • Authentication Options: Optional Bearer token authentication for HTTP endpoints
  • Health Monitoring: /health endpoint for service monitoring

🔧 Technical Improvements

  • Updated MCP SDK to v1.13.1 for latest protocol support
  • Added Express.js integration for HTTP server functionality
  • Enhanced TypeScript types for better developer experience

Version 1.0.0 (June 2025)

🎯 Major Features

  • Real-Time Notification System: Track all MCP activities with comprehensive notification support
  • RAG Integration: Support for document-enhanced queries through the plugged.in App
  • Inspector Scripts: New automated testing tools for debugging and development
  • Health Monitoring: Built-in ping endpoint for connection monitoring

🔒 Security Enhancements

  • Input Validation: Industry-standard validation and sanitization for all inputs
  • URL Security: Enhanced URL validation with SSRF protection
  • Environment Security: Secure parsing of environment variables with dotenv
  • Error Sanitization: Prevents information disclosure in error responses

๐Ÿ› Bug Fixes

  • Fixed JSON-RPC protocol interference (stdout vs stderr separation)
  • Resolved localhost URL validation for development environments
  • Fixed API key handling in inspector scripts
  • Improved connection stability and memory management

🔧 Developer Tools

  • New inspector scripts for automated testing
  • Improved error messages and debugging capabilities
  • Structured logging with proper stderr usage
  • Enhanced TypeScript type safety

See Release Notes for complete details.

🧪 Testing and Development

Local Development

Tests are included for development purposes but are excluded from Docker builds to minimize the container footprint.

bash
# Run tests locally
npm test
# or
./scripts/test-local.sh

# Run tests in watch mode
npm run test:watch

# Run tests with UI
npm run test:ui

Lightweight Docker Builds

The Docker image is optimized for minimal footprint:

  • Multi-stage build process
  • Only production dependencies in final image
  • Test files and dev dependencies excluded
  • Optimized for resource-constrained environments

bash
# Build optimized Docker image
docker build -t pluggedin-mcp .

# Check image size
docker images pluggedin-mcp

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

๐Ÿ™ Acknowledgements

Star History


Repository Owner

VeriTeknik (Organization)

Repository Details

Language: TypeScript
Default Branch: main
Size: 777 KB
Contributors: 4
License: Apache License 2.0
MCP Verified: Sep 1, 2025

Programming Languages

TypeScript: 93.51%
JavaScript: 5.47%
Shell: 0.73%
Dockerfile: 0.29%


Related MCPs

Discover similar Model Context Protocol servers

  • mcp

    mcp

    Universal remote MCP server connecting AI clients to productivity tools.

    WayStation MCP acts as a remote Model Context Protocol (MCP) server, enabling seamless integration between AI clients like Claude or Cursor and a wide range of productivity applications, such as Notion, Monday, Airtable, Jira, and more. It supports multiple secure connection transports and offers both general and user-specific preauthenticated endpoints. The platform emphasizes ease of integration, OAuth2-based authentication, and broad app compatibility. Users can manage their integrations through a user dashboard, simplifying complex workflow automations for AI-powered productivity.

    • โญ 27
    • MCP
    • waystation-ai/mcp
  • awslabs/mcp

    awslabs/mcp

    Specialized MCP servers for seamless AWS integration in AI and development environments.

    AWS MCP Servers is a suite of specialized servers implementing the open Model Context Protocol (MCP) to bridge large language model (LLM) applications with AWS services, tools, and data sources. It provides a standardized way for AI assistants, IDEs, and developer tools to access up-to-date AWS documentation, perform cloud operations, and automate workflows with context-aware intelligence. Featuring a broad catalog of domain-specific servers, quick installation for popular platforms, and both local and remote deployment options, it enhances cloud-native development, infrastructure management, and workflow automation for AI-driven tools. The project includes Docker, Lambda, and direct integration instructions for environments such as Amazon Q CLI, Cursor, Windsurf, Kiro, and VS Code.

    • โญ 6,220
    • MCP
    • awslabs/mcp
  • cloudflare/mcp-server-cloudflare

    cloudflare/mcp-server-cloudflare

    Connect Cloudflare services to Model Context Protocol (MCP) clients for AI-powered management.

    Cloudflare MCP Server enables integration between Cloudflare's suite of services and clients using the Model Context Protocol (MCP). It provides multiple specialized servers that allow AI models to access, analyze, and manage configurations, logs, analytics, and other features across Cloudflare's platform. Users can leverage natural language interfaces in compatible MCP clients to read data, gain insights, and perform automated actions on their Cloudflare accounts. This project aims to streamline the orchestration of security, development, monitoring, and infrastructure tasks through standardized MCP connections.

    • โญ 2,919
    • MCP
    • cloudflare/mcp-server-cloudflare
  • 1mcp-app/agent

    1mcp-app/agent

    A unified server that aggregates and manages multiple Model Context Protocol servers.

    1MCP Agent provides a single, unified interface that aggregates multiple Model Context Protocol (MCP) servers, enabling seamless integration and management of external tools for AI assistants. It acts as a proxy, managing server configuration, authentication, health monitoring, and dynamic server control with features like asynchronous loading, tag-based filtering, and advanced security options. Compatible with popular AI development environments, it simplifies setup by reducing redundant server instances and resource usage. Users can configure, monitor, and scale model tool integrations across various AI clients through easy CLI commands or Docker deployment.

    • โญ 96
    • MCP
    • 1mcp-app/agent
  • magg

    magg

    Meta-MCP aggregator and manager for LLM capability extension.

    Magg is a server that implements the Model Context Protocol (MCP), acting as a central aggregator and proxy for multiple MCP servers. It enables Large Language Models (LLMs) to dynamically discover, add, configure, and manage external tools at runtime. By aggregating tools from different MCP servers under unified namespaces, it streamlines capability management and introduces features such as configuration persistence, authentication, and real-time notifications. Magg offers both command-line and Docker deployment, with support for HTTP, stdio, and in-memory transport.

    • โญ 62
    • MCP
    • sitbon/magg
  • mcpmcp-server

    mcpmcp-server

    Seamlessly discover, set up, and integrate MCP servers with AI clients.

    mcpmcp-server enables users to discover, configure, and connect MCP servers with preferred clients, optimizing AI integration into daily workflows. It supports streamlined setup via JSON configuration, ensuring compatibility with various platforms such as Claude Desktop on macOS. The project simplifies the connection process between AI clients and remote Model Context Protocol servers. Users are directed to an associated homepage for further platform-specific guidance.

    • โญ 17
    • MCP
    • glenngillen/mcpmcp-server