MCP Rubber Duck

A bridge server for querying multiple OpenAI-compatible LLMs through the Model Context Protocol.

56 Stars · 7 Forks · 56 Watchers · 0 Issues
MCP Rubber Duck acts as an MCP (Model Context Protocol) server that lets users query and manage multiple OpenAI-compatible large language models through a unified interface. It supports parallel querying of multiple providers, context management across sessions, failover between providers, and response caching. It is designed for debugging and experimentation, letting users gather diverse AI perspectives from different model endpoints.

Key Features

Connects to any OpenAI-compatible API endpoint
Supports multiple LLM providers simultaneously
Maintains conversation context across sessions
Provides 'council' responses from multiple models in parallel
Caches responses to prevent redundant API calls
Automatic failover between LLM providers
Real-time health monitoring for all configured models
Security controls with per-server session-based approvals
Bridges to other MCP servers for extended capabilities
Customizable provider configurations via environment variables

Use Cases

Aggregating answers from multiple language models for better insights
Debugging through diverse AI perspectives (rubber duck debugging)
Providing automatic backup/failover for LLM API endpoints
Managing conversational context across multi-turn AI sessions
Developing applications requiring responses from multiple LLM vendors
Caching AI model responses to optimize costs and latency
Experimenting with open-source and proprietary language models in one interface
Monitoring performance and availability of deployed language models
Integrating custom or local language model endpoints with standardized APIs
Extending LLM-powered tools by connecting to other MCP-compliant servers

README

🦆 MCP Rubber Duck

An MCP (Model Context Protocol) server that acts as a bridge to query multiple OpenAI-compatible LLMs. Just like rubber duck debugging, explain your problems to various AI "ducks" and get different perspectives!

Available on npm, as a Docker image, and in the MCP Registry.

     __
   <(o )___
    ( ._> /
     `---'  Quack! Ready to debug!

Features

  • 🔌 Universal OpenAI Compatibility: Works with any OpenAI-compatible API endpoint
  • 🦆 Multiple Ducks: Configure and query multiple LLM providers simultaneously
  • 💬 Conversation Management: Maintain context across multiple messages
  • 🏛️ Duck Council: Get responses from all your configured LLMs at once
  • 💾 Response Caching: Avoid duplicate API calls with intelligent caching
  • 🔄 Automatic Failover: Falls back to other providers if the primary fails
  • 📊 Health Monitoring: Real-time health checks for all providers
  • 🔗 MCP Bridge: Connect ducks to other MCP servers for extended functionality
  • 🛡️ Granular Security: Per-server approval controls with session-based approvals
  • 🎨 Fun Duck Theme: Rubber duck debugging with personality!

Supported Providers

Any provider with an OpenAI-compatible API endpoint, including:

  • OpenAI (GPT-4, GPT-3.5)
  • Google Gemini (Gemini 2.5 Flash, Gemini 2.0 Flash)
  • Anthropic (via OpenAI-compatible endpoints)
  • Groq (Llama, Mixtral, Gemma)
  • Together AI (Llama, Mixtral, and more)
  • Perplexity (Online models with web search)
  • Anyscale (Open source models)
  • Azure OpenAI (Microsoft-hosted OpenAI)
  • Ollama (Local models)
  • LM Studio (Local models)
  • Custom (Any OpenAI-compatible endpoint)

Quick Start

For Claude Desktop Users

👉 Complete Claude Desktop setup instructions below in Claude Desktop Configuration

Installation

Prerequisites

  • Node.js 20 or higher
  • npm or yarn
  • At least one API key for a supported provider

Installation Methods

Option 1: Install from NPM

bash
npm install -g mcp-rubber-duck

Option 2: Install from Source

bash
# Clone the repository
git clone https://github.com/nesquikm/mcp-rubber-duck.git
cd mcp-rubber-duck

# Install dependencies
npm install

# Build the project
npm run build

# Run the server
npm start

Configuration

Method 1: Environment Variables

Create a .env file in the project root:

env
# OpenAI
OPENAI_API_KEY=sk-...
OPENAI_DEFAULT_MODEL=gpt-4o-mini  # Optional: defaults to gpt-4o-mini

# Google Gemini
GEMINI_API_KEY=...
GEMINI_DEFAULT_MODEL=gemini-2.5-flash  # Optional: defaults to gemini-2.5-flash

# Groq
GROQ_API_KEY=gsk_...
GROQ_DEFAULT_MODEL=llama-3.3-70b-versatile  # Optional: defaults to llama-3.3-70b-versatile

# Ollama (Local)
OLLAMA_BASE_URL=http://localhost:11434/v1  # Optional
OLLAMA_DEFAULT_MODEL=llama3.2  # Optional: defaults to llama3.2

# Together AI
TOGETHER_API_KEY=...

# Custom Providers (you can add multiple)
# Format: CUSTOM_{NAME}_* where NAME becomes the provider key (lowercase)

# Example: Add provider "myapi"
CUSTOM_MYAPI_API_KEY=...
CUSTOM_MYAPI_BASE_URL=https://api.example.com/v1
CUSTOM_MYAPI_DEFAULT_MODEL=custom-model  # Optional
CUSTOM_MYAPI_MODELS=model1,model2        # Optional: comma-separated list
CUSTOM_MYAPI_NICKNAME=My Custom Duck     # Optional: display name

# Example: Add provider "azure" 
CUSTOM_AZURE_API_KEY=...
CUSTOM_AZURE_BASE_URL=https://mycompany.openai.azure.com/v1

# Global Settings
DEFAULT_PROVIDER=openai
DEFAULT_TEMPERATURE=0.7
LOG_LEVEL=info

# MCP Bridge Settings (Optional)
MCP_BRIDGE_ENABLED=true                      # Enable ducks to access external MCP servers
MCP_APPROVAL_MODE=trusted                    # always, trusted, or never
MCP_APPROVAL_TIMEOUT=300                     # seconds

# MCP Server: Context7 Documentation (Example)
MCP_SERVER_CONTEXT7_TYPE=http
MCP_SERVER_CONTEXT7_URL=https://mcp.context7.com/mcp
MCP_SERVER_CONTEXT7_ENABLED=true

# Per-server trusted tools
MCP_TRUSTED_TOOLS_CONTEXT7=*                 # Trust all Context7 tools

# Optional: Custom Duck Nicknames (Have fun with these!)
OPENAI_NICKNAME="DUCK-4"              # Optional: defaults to "GPT Duck"
GEMINI_NICKNAME="Duckmini"            # Optional: defaults to "Gemini Duck"
GROQ_NICKNAME="Quackers"              # Optional: defaults to "Groq Duck"
OLLAMA_NICKNAME="Local Quacker"       # Optional: defaults to "Local Duck"
CUSTOM_NICKNAME="My Special Duck"     # Optional: defaults to "Custom Duck"

Note: Duck nicknames are completely optional! If you don't set them, you'll get the charming defaults (GPT Duck, Gemini Duck, etc.). If you use a config.json file, those nicknames take priority over environment variables.

Method 2: Configuration File

Create a config/config.json file based on the example:

bash
cp config/config.example.json config/config.json
# Edit config/config.json with your API keys and preferences
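For orientation, a config.json might look roughly like the sketch below. The field names here are assumptions inferred from the environment variables documented above (API key, base URL, default model, nickname), so defer to config/config.example.json for the authoritative schema:

```json
{
  "providers": {
    "openai": {
      "api_key": "sk-...",
      "default_model": "gpt-4o-mini",
      "nickname": "DUCK-4"
    },
    "myapi": {
      "api_key": "...",
      "base_url": "https://api.example.com/v1",
      "models": ["model1", "model2"],
      "nickname": "My Custom Duck"
    }
  },
  "default_provider": "openai",
  "default_temperature": 0.7
}
```

As noted above, nicknames set in config.json take priority over the corresponding environment variables.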

Claude Desktop Configuration

This is the most common setup method for using MCP Rubber Duck with Claude Desktop.

Step 1: Build the Project

First, ensure the project is built:

bash
# Clone the repository
git clone https://github.com/nesquikm/mcp-rubber-duck.git
cd mcp-rubber-duck

# Install dependencies and build
npm install
npm run build

Step 2: Configure Claude Desktop

Edit your Claude Desktop config file:

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%\Claude\claude_desktop_config.json

Add the MCP server configuration:

json
{
  "mcpServers": {
    "rubber-duck": {
      "command": "node",
      "args": ["/absolute/path/to/mcp-rubber-duck/dist/index.js"],
      "env": {
        "MCP_SERVER": "true",
        "OPENAI_API_KEY": "your-openai-api-key-here",
        "OPENAI_DEFAULT_MODEL": "gpt-4o-mini",
        "GEMINI_API_KEY": "your-gemini-api-key-here", 
        "GEMINI_DEFAULT_MODEL": "gemini-2.5-flash",
        "DEFAULT_PROVIDER": "openai",
        "LOG_LEVEL": "info"
      }
    }
  }
}

Important: Replace the placeholder API keys with your actual keys:

  • your-openai-api-key-here → Your OpenAI API key (starts with sk-)
  • your-gemini-api-key-here → Your Gemini API key from Google AI Studio

Note: MCP_SERVER: "true" is required; it tells rubber-duck to run as an MCP server for any MCP client (this is unrelated to the MCP Bridge feature).

Step 3: Restart Claude Desktop

  1. Completely quit Claude Desktop (⌘+Q on Mac)
  2. Launch Claude Desktop again
  3. The MCP server should connect automatically

Step 4: Test the Integration

Once restarted, test these commands in Claude:

Check Duck Health

Use the list_ducks tool with check_health: true

Should show:

  • GPT Duck (openai) - Healthy
  • Gemini Duck (gemini) - Healthy

List Available Models

Use the list_models tool

Ask a Specific Duck

Use the ask_duck tool with prompt: "What is rubber duck debugging?", provider: "openai"

Compare Multiple Ducks

Use the compare_ducks tool with prompt: "Explain async/await in JavaScript"

Test Specific Models

Use the ask_duck tool with prompt: "Hello", provider: "openai", model: "gpt-4"

Troubleshooting Claude Desktop Setup

If Tools Don't Appear

  1. Check API Keys: Ensure your API keys are correctly entered without typos
  2. Verify Build: Run ls -la dist/index.js to confirm the project built successfully
  3. Check Logs: Look for errors in Claude Desktop's developer console
  4. Restart: Fully quit and restart Claude Desktop after config changes

Connection Issues

  1. Config File Path: Double-check you're editing the correct config file path
  2. JSON Syntax: Validate your JSON syntax (no trailing commas, proper quotes)
  3. Absolute Paths: Ensure you're using the full absolute path to dist/index.js
  4. File Permissions: Verify Claude Desktop can read the dist directory

Health Check Failures

If ducks show as unhealthy:

  1. API Keys: Verify keys are valid and have sufficient credits/quota
  2. Network: Check internet connection and firewall settings
  3. Rate Limits: Some providers have strict rate limits for new accounts

MCP Bridge - Connect to Other MCP Servers

The MCP Bridge allows your ducks to access tools from other MCP servers, extending their capabilities beyond just chat. Your ducks can now search documentation, access files, query APIs, and much more!

Note: This is different from the MCP server integration above:

  • MCP Bridge (MCP_BRIDGE_ENABLED): Ducks USE external MCP servers as clients
  • MCP Server (MCP_SERVER): Rubber-duck SERVES as an MCP server to any MCP client

Quick Setup

Add these environment variables to enable MCP Bridge:

bash
# Basic MCP Bridge Configuration
MCP_BRIDGE_ENABLED="true"                # Enable ducks to access external MCP servers
MCP_APPROVAL_MODE="trusted"              # always, trusted, or never
MCP_APPROVAL_TIMEOUT="300"               # 5 minutes

# Example: Context7 Documentation Server
MCP_SERVER_CONTEXT7_TYPE="http"
MCP_SERVER_CONTEXT7_URL="https://mcp.context7.com/mcp"
MCP_SERVER_CONTEXT7_ENABLED="true"

# Trust all Context7 tools (no approval needed)
MCP_TRUSTED_TOOLS_CONTEXT7="*"

Approval Modes

always: Every tool call requires approval (with session-based memory)

  • First use of a tool → requires approval
  • Subsequent uses of the same tool → automatic (until restart)

trusted: Only untrusted tools require approval

  • Tools in trusted lists execute immediately
  • Unknown tools require approval

never: All tools execute immediately (use with caution)
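The three modes boil down to a simple decision: given the mode, the tool name, the trusted list, and what was already approved this session, does this call need a prompt? The sketch below is illustrative only; `needsApproval` and its parameters are hypothetical names, not the project's actual API:

```javascript
// Illustrative sketch of the approval decision described above.
// `needsApproval` and its parameters are hypothetical names.
function needsApproval(mode, toolName, trustedTools, sessionApprovals) {
  if (mode === "never") return false; // everything runs immediately
  if (mode === "trusted") {
    // Trusted tools (or a "*" wildcard) skip approval; unknown tools ask.
    return !(trustedTools.has("*") || trustedTools.has(toolName));
  }
  // "always": ask once per tool, then remember for the session.
  return !sessionApprovals.has(toolName);
}

// Session memory: once the user approves a tool, record it so repeat
// calls to the same tool pass without a new prompt (until restart).
const session = new Set();
needsApproval("always", "search-docs", new Set(), session); // true → prompt user
session.add("search-docs");                                 // user approved
needsApproval("always", "search-docs", new Set(), session); // false → runs
```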

Per-Server Trusted Tools

Configure trust levels per MCP server for granular security:

bash
# Trust all tools from Context7 (documentation server)
MCP_TRUSTED_TOOLS_CONTEXT7="*"

# Trust specific filesystem operations only
MCP_TRUSTED_TOOLS_FILESYSTEM="read-file,list-directory"

# Trust specific GitHub tools
MCP_TRUSTED_TOOLS_GITHUB="get-repo-info,list-issues"

# Global fallback for servers without specific config
MCP_TRUSTED_TOOLS="common-safe-tool"

MCP Server Configuration

Configure MCP servers using environment variables:

HTTP Servers

bash
MCP_SERVER_{NAME}_TYPE="http"
MCP_SERVER_{NAME}_URL="https://api.example.com/mcp"
MCP_SERVER_{NAME}_API_KEY="your-api-key"        # Optional
MCP_SERVER_{NAME}_ENABLED="true"

STDIO Servers

bash
MCP_SERVER_{NAME}_TYPE="stdio"
MCP_SERVER_{NAME}_COMMAND="python"
MCP_SERVER_{NAME}_ARGS="/path/to/script.py,--arg1,--arg2"
MCP_SERVER_{NAME}_ENABLED="true"
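The `MCP_SERVER_{NAME}_*` convention maps flat environment variables onto one config object per server: the `{NAME}` segment becomes the (lowercased) server key, and `ARGS` is split on commas. As a rough sketch of that mapping (`parseMcpServers` is a hypothetical helper, not the project's actual loader):

```javascript
// Rough sketch of how MCP_SERVER_{NAME}_* variables could map onto
// per-server config objects. Hypothetical helper, not project code.
function parseMcpServers(env) {
  const servers = {};
  const pattern = /^MCP_SERVER_([A-Z0-9]+)_(TYPE|URL|API_KEY|COMMAND|ARGS|ENABLED)$/;
  for (const [key, value] of Object.entries(env)) {
    const match = key.match(pattern);
    if (!match) continue;
    const name = match[1].toLowerCase();  // CONTEXT7 → context7
    const field = match[2].toLowerCase(); // TYPE → type
    servers[name] = servers[name] || {};
    if (field === "args") {
      servers[name].args = value.split(","); // comma-separated list
    } else if (field === "enabled") {
      servers[name].enabled = value === "true";
    } else {
      servers[name][field] = value;
    }
  }
  return servers;
}

const servers = parseMcpServers({
  MCP_SERVER_CONTEXT7_TYPE: "http",
  MCP_SERVER_CONTEXT7_URL: "https://mcp.context7.com/mcp",
  MCP_SERVER_CONTEXT7_ENABLED: "true",
});
// servers.context7 → { type: "http", url: "https://…", enabled: true }
```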

Example: Enable Context7 Documentation

bash
# Enable MCP Bridge
MCP_BRIDGE_ENABLED="true"
MCP_APPROVAL_MODE="trusted"

# Configure Context7 server
MCP_SERVER_CONTEXT7_TYPE="http"
MCP_SERVER_CONTEXT7_URL="https://mcp.context7.com/mcp"
MCP_SERVER_CONTEXT7_ENABLED="true"

# Trust all Context7 tools
MCP_TRUSTED_TOOLS_CONTEXT7="*"

Now your ducks can search and retrieve documentation from Context7:

Ask: "Can you find React hooks documentation from Context7 and return only the key concepts?"
Duck: *searches Context7 and returns focused, essential React hooks information*

💡 Token Optimization Benefits

Smart Token Management: Ducks can retrieve comprehensive data from MCP servers but return only the essential information you need, saving tokens in your host LLM conversations:

  • Ask for specifics: "Find TypeScript interfaces documentation and return only the core concepts"
  • Duck processes full docs: Accesses complete documentation from Context7
  • Returns condensed results: Provides focused, relevant information while filtering out unnecessary details
  • Token savings: Reduces response size by 70-90% compared to raw documentation dumps

Example Workflow:

You: "Find Express.js routing concepts from Context7, keep it concise"
Duck: *Retrieves full Express docs, processes, and returns only routing essentials*
Result: 500 tokens instead of 5,000+ tokens of raw documentation

Session-Based Approvals

When using always mode, the system remembers your approvals:

  1. First time: "Duck wants to use search-docs - Approve? ✅"
  2. Next time: Duck uses search-docs automatically (no new approval needed)
  3. Different tool: "Duck wants to use get-examples - Approve? ✅"
  4. Restart: Session memory clears, start over

This eliminates approval fatigue while maintaining security!

Available Tools (Enhanced with MCP)

🦆 ask_duck

Ask a single question to a specific LLM provider. When MCP Bridge is enabled, ducks can automatically access tools from connected MCP servers.

typescript
{
  "prompt": "What is rubber duck debugging?",
  "provider": "openai",  // Optional, uses default if not specified
  "temperature": 0.7     // Optional
}

💬 chat_with_duck

Have a conversation with context maintained across messages.

typescript
{
  "conversation_id": "debug-session-1",
  "message": "Can you help me debug this code?",
  "provider": "groq"  // Optional, can switch providers mid-conversation
}

🧹 clear_conversations

Clear all conversation history and start fresh. Useful when switching topics or when context becomes too large.

typescript
{
  // No parameters required
}

📋 list_ducks

List all configured providers and their health status.

typescript
{
  "check_health": true  // Optional, performs fresh health check
}

📊 list_models

List available models for LLM providers.

typescript
{
  "provider": "openai",     // Optional, lists all if not specified
  "fetch_latest": false     // Optional, fetch latest from API vs cached
}

🔍 compare_ducks

Ask the same question to multiple providers simultaneously.

typescript
{
  "prompt": "What's the best programming language?",
  "providers": ["openai", "groq", "ollama"]  // Optional, uses all if not specified
}

🏛️ duck_council

Get responses from all configured ducks - like a panel discussion!

typescript
{
  "prompt": "How should I architect a microservices application?"
}

Usage Examples

Basic Query

javascript
// Ask the default duck
await ask_duck({ 
  prompt: "Explain async/await in JavaScript" 
});

Conversation

javascript
// Start a conversation
await chat_with_duck({
  conversation_id: "learning-session",
  message: "What is TypeScript?"
});

// Continue the conversation
await chat_with_duck({
  conversation_id: "learning-session", 
  message: "How does it differ from JavaScript?"
});

Compare Responses

javascript
// Get different perspectives
await compare_ducks({
  prompt: "What's the best way to handle errors in Node.js?",
  providers: ["openai", "groq", "ollama"]
});

Duck Council

javascript
// Convene the council for important decisions
await duck_council({
  prompt: "Should I use REST or GraphQL for my API?"
});

Provider-Specific Setup

Ollama (Local)

bash
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Pull a model
ollama pull llama3.2

# Ollama automatically provides OpenAI-compatible endpoint at localhost:11434/v1

LM Studio (Local)

  1. Download LM Studio from https://lmstudio.ai/
  2. Load a model in LM Studio
  3. Start the local server (provides OpenAI-compatible endpoint at localhost:1234/v1)

Google Gemini

  1. Get API key from Google AI Studio
  2. Add to environment: GEMINI_API_KEY=...
  3. Uses OpenAI-compatible endpoint (beta)

Groq

  1. Get API key from https://console.groq.com/keys
  2. Add to environment: GROQ_API_KEY=gsk_...

Together AI

  1. Get API key from https://api.together.xyz/
  2. Add to environment: TOGETHER_API_KEY=...

Verifying OpenAI Compatibility

To check if a provider is OpenAI-compatible:

  1. Look for /v1/chat/completions endpoint in their API docs
  2. Check if they support the OpenAI SDK
  3. Test with curl:
bash
curl -X POST "https://api.provider.com/v1/chat/completions" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "model-name",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
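The same probe can be scripted from Node (18+ ships a global fetch). This is a generic sketch equivalent to the curl above; the base URL, key, and model name are placeholders you would substitute:

```javascript
// Generic OpenAI-compatibility probe, equivalent to the curl command
// above. Substitute your own base URL, API key, and model name.
function buildChatCompletionRequest(baseUrl, apiKey, model, prompt) {
  return {
    url: `${baseUrl.replace(/\/$/, "")}/chat/completions`,
    options: {
      method: "POST",
      headers: {
        "Authorization": `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        model,
        messages: [{ role: "user", content: prompt }],
      }),
    },
  };
}

// Usage:
// const { url, options } = buildChatCompletionRequest(
//   "https://api.provider.com/v1", "YOUR_API_KEY", "model-name", "Hello");
// const res = await fetch(url, options);
// A compatible provider returns JSON with a choices[0].message.content field.
```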

Development

Run in Development Mode

bash
npm run dev

Run Tests

bash
npm test

Lint Code

bash
npm run lint

Type Checking

bash
npm run typecheck

Docker Support

MCP Rubber Duck provides multi-platform Docker support, working on macOS (Intel & Apple Silicon), Linux (x86_64 & ARM64), Windows (WSL2), and Raspberry Pi 3+.

Quick Start with Pre-built Image

The easiest way to get started is with our pre-built multi-architecture image:

bash
# Pull the image (works on all platforms)
docker pull ghcr.io/nesquikm/mcp-rubber-duck:latest

# Create environment file
cp .env.template .env
# Edit .env and add your API keys

# Run with Docker Compose (recommended)
docker compose up -d

Platform-Specific Deployment

🖥️ Desktop/Server (macOS, Linux, Windows)

bash
# Use desktop-optimized settings
./scripts/deploy.sh --platform desktop

# Or with more resources and local AI
./scripts/deploy.sh --platform desktop --profile with-ollama

🥧 Raspberry Pi

bash
# Use Pi-optimized settings (memory limits, etc.)
./scripts/deploy.sh --platform pi

# Or copy optimized config directly
cp .env.pi.example .env
# Edit .env and add your API keys
docker compose up -d

🌐 Remote Deployment via SSH

bash
# Deploy to remote Raspberry Pi
./scripts/deploy.sh --mode ssh --ssh-host pi@192.168.1.100

Universal Deployment Script

The scripts/deploy.sh script auto-detects your platform and applies optimal settings:

bash
# Auto-detect platform and deploy
./scripts/deploy.sh

# Options:
./scripts/deploy.sh --help

Available options:

  • --mode: docker (default), local, or ssh
  • --platform: pi, desktop, or auto (default)
  • --profile: lightweight, desktop, with-ollama
  • --ssh-host: For remote deployment

Platform-Specific Configuration

Raspberry Pi (Memory-Optimized)

env
# .env.pi.example - Optimized for Pi 3+
DOCKER_CPU_LIMIT=1.5
DOCKER_MEMORY_LIMIT=512M
NODE_OPTIONS=--max-old-space-size=256

Desktop/Server (High-Performance)

env
# .env.desktop.example - Optimized for powerful systems
DOCKER_CPU_LIMIT=4.0
DOCKER_MEMORY_LIMIT=2G
NODE_OPTIONS=--max-old-space-size=1024

Docker Compose Profiles

bash
# Default profile (lightweight, good for Pi)
docker compose up -d

# Desktop profile (higher resource limits)
docker compose --profile desktop up -d

# With local Ollama AI
docker compose --profile with-ollama up -d

Build Multi-Architecture Images

For developers who want to build and publish their own multi-architecture images:

bash
# Build for AMD64 + ARM64
./scripts/build-multiarch.sh --platforms linux/amd64,linux/arm64

# Build and push to GitHub Container Registry
./scripts/gh-deploy.sh --public

Claude Desktop with Remote Docker

Connect Claude Desktop to MCP Rubber Duck running on a remote system:

json
{
  "mcpServers": {
    "rubber-duck-remote": {
      "command": "ssh",
      "args": [
        "user@remote-host",
        "docker exec -i mcp-rubber-duck node /app/dist/index.js"
      ]
    }
  }
}

Platform Compatibility

Platform               Architecture   Status         Notes
macOS (Intel)          AMD64          ✅ Full        Via Docker Desktop
macOS (Apple Silicon)  ARM64          ✅ Full        Native ARM64 support
Linux x86_64           AMD64          ✅ Full        Direct Docker support
Linux ARM64            ARM64          ✅ Full        Servers, Pi 4+
Raspberry Pi 3+        ARM64          ✅ Optimized   Memory-limited config
Windows                AMD64          ✅ Full        Via Docker Desktop + WSL2

Manual Docker Commands

If you prefer not to use Docker Compose:

bash
# Raspberry Pi
docker run -d \
  --name mcp-rubber-duck \
  --memory=512m --cpus=1.5 \
  --env-file .env \
  --restart unless-stopped \
  ghcr.io/nesquikm/mcp-rubber-duck:latest

# Desktop/Server
docker run -d \
  --name mcp-rubber-duck \
  --memory=2g --cpus=4 \
  --env-file .env \
  --restart unless-stopped \
  ghcr.io/nesquikm/mcp-rubber-duck:latest

Architecture

mcp-rubber-duck/
├── src/
│   ├── server.ts           # MCP server implementation
│   ├── config/             # Configuration management
│   ├── providers/          # OpenAI client wrapper
│   ├── tools/              # MCP tool implementations
│   ├── services/           # Health, cache, conversations
│   └── utils/              # Logging, ASCII art
├── config/                 # Configuration examples
└── tests/                  # Test suites

Troubleshooting

Provider Not Working

  1. Check API key is correctly set
  2. Verify endpoint URL is correct
  3. Run health check: list_ducks({ check_health: true })
  4. Check logs for detailed error messages

Connection Issues

  • For local providers (Ollama, LM Studio), ensure they're running
  • Check firewall settings for local endpoints
  • Verify network connectivity to cloud providers

Rate Limiting

  • Enable caching to reduce API calls
  • Configure failover to alternate providers
  • Adjust max_retries and timeout settings
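Caching helps with rate limits because identical prompts to the same provider and model can reuse an earlier answer instead of spending another API call. A minimal sketch of the idea (hypothetical names; mcp-rubber-duck's actual cache implementation may differ):

```javascript
// Minimal TTL response cache keyed by provider + model + prompt.
// Hypothetical sketch, not the project's actual cache implementation.
class ResponseCache {
  constructor(ttlMs = 60_000) {
    this.ttlMs = ttlMs;
    this.entries = new Map();
  }
  key(provider, model, prompt) {
    return `${provider}\u0000${model}\u0000${prompt}`;
  }
  get(provider, model, prompt) {
    const entry = this.entries.get(this.key(provider, model, prompt));
    if (!entry) return undefined;
    if (Date.now() - entry.at > this.ttlMs) return undefined; // expired
    return entry.value;
  }
  set(provider, model, prompt, value) {
    this.entries.set(this.key(provider, model, prompt), {
      at: Date.now(),
      value,
    });
  }
}

const cache = new ResponseCache(5 * 60_000); // 5-minute TTL
cache.set("openai", "gpt-4o-mini", "What is MCP?", "cached answer");
cache.get("openai", "gpt-4o-mini", "What is MCP?"); // hit: cached answer
cache.get("groq", "llama-3.3-70b-versatile", "What is MCP?"); // miss
```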

Contributing

🦆 Want to help make our duck pond better?

We love contributions! Whether you're fixing bugs, adding features, or teaching our ducks new tricks, we'd love to have you join the flock.

Check out our Contributing Guide to get started. We promise it's more fun than a regular contributing guide - it has ducks! 🦆

Quick start for contributors:

  1. Fork the repository
  2. Create a feature branch
  3. Follow our conventional commit guidelines
  4. Add tests for new functionality
  5. Submit a pull request

License

MIT License - see LICENSE file for details

Acknowledgments

  • Inspired by the rubber duck debugging method
  • Built on the Model Context Protocol (MCP)
  • Uses OpenAI SDK for universal compatibility

📝 Changelog

See CHANGELOG.md for a detailed history of changes and releases.

📦 Registry & Directory

MCP Rubber Duck is available through multiple channels:

Support


🦆 Happy Debugging with your AI Duck Panel! 🦆


Repository Owner

nesquikm (User)

Repository Details

Language: TypeScript
Default Branch: master
Size: 275 KB
Contributors: 2
License: MIT License
MCP Verified: Nov 11, 2025

Programming Languages

TypeScript 73.58% · Shell 20.07% · JavaScript 5.89% · Dockerfile 0.46%


Related MCPs

Discover similar Model Context Protocol servers

  • Cross-LLM MCP Server

    Cross-LLM MCP Server

    Unified MCP server for accessing and combining multiple LLM APIs.

    Cross-LLM MCP Server is a Model Context Protocol (MCP) server enabling seamless access to a range of Large Language Model APIs including ChatGPT, Claude, DeepSeek, Gemini, Grok, Kimi, Perplexity, and Mistral. It provides a unified interface for invoking different LLMs from any MCP-compatible client, allowing users to call and aggregate responses across providers. The server implements eight specialized tools for interacting with these LLMs, each offering configurable options like model selection, temperature, and token limits. Output includes model context details as well as token usage statistics for each response.

    • 9
    • MCP
    • JamesANZ/cross-llm-mcp
  • MCP CLI

    MCP CLI

    A powerful CLI for seamless interaction with Model Context Protocol servers and advanced LLMs.

    MCP CLI is a modular command-line interface designed for interacting with Model Context Protocol (MCP) servers and managing conversations with large language models. It integrates with the CHUK Tool Processor and CHUK-LLM to provide real-time chat, interactive command shells, and automation capabilities. The system supports a wide array of AI providers and models, advanced tool usage, context management, and performance metrics. Rich output formatting, concurrent tool execution, and flexible configuration make it suitable for both end-users and developers.

    • 1,755
    • MCP
    • chrishayuk/mcp-cli
  • Open Data Model Context Protocol

    Open Data Model Context Protocol

    Easily connect open data providers to LLMs using a Model Context Protocol server and CLI.

    Open Data Model Context Protocol enables seamless integration of open public datasets into Large Language Model (LLM) applications, starting with support for Claude. Through a CLI tool and server, users can access and query public data providers within their LLM clients. It also offers tools and templates for contributors to publish and distribute new open datasets, making data discoverable and actionable for LLM queries.

    • 140
    • MCP
    • OpenDataMCP/OpenDataMCP
  • Wanaku MCP Router

    Wanaku MCP Router

    A router connecting AI-enabled applications through the Model Context Protocol.

    Wanaku MCP Router serves as a middleware router facilitating standardized context exchange between AI-enabled applications and large language models via the Model Context Protocol (MCP). It streamlines context provisioning, allowing seamless integration and communication in multi-model AI environments. The tool aims to unify and optimize the way applications provide relevant context to LLMs, leveraging open protocol standards.

    • 87
    • MCP
    • wanaku-ai/wanaku
  • Lara Translate MCP Server

    Lara Translate MCP Server

    Context-aware translation server implementing the Model Context Protocol.

    Lara Translate MCP Server enables AI applications to seamlessly access professional translation services via the standardized Model Context Protocol. It supports features such as language detection, context-aware translations, and translation memory integration. The server acts as a secure bridge between AI models and Lara Translate, managing credentials and facilitating structured translation requests and responses.

    • 76
    • MCP
    • translated/lara-mcp
  • Unichat MCP Server

    Unichat MCP Server

    Universal MCP server providing context-aware AI chat and code tools across major model vendors.

    Unichat MCP Server enables sending standardized requests to leading AI model vendors, including OpenAI, MistralAI, Anthropic, xAI, Google AI, DeepSeek, Alibaba, and Inception, utilizing the Model Context Protocol. It features unified endpoints for chat interactions and provides specialized tools for code review, documentation generation, code explanation, and programmatic code reworking. The server is designed for seamless integration with platforms like Claude Desktop and installation via Smithery. Vendor API keys are required for secure access to supported providers.

    • 37
    • MCP
    • amidabuddha/unichat-mcp-server