Cross-LLM MCP Server
A Model Context Protocol (MCP) server that provides access to multiple Large Language Model (LLM) APIs including ChatGPT, Claude, DeepSeek, Gemini, Grok, Kimi, Perplexity, and Mistral. This allows you to call different LLMs from within any MCP-compatible client and combine their responses.
Features
This MCP server offers ten specialized tools for interacting with different LLM providers: eight individual provider tools plus two combined tools.
🤖 Individual LLM Tools
call-chatgpt
Call OpenAI's ChatGPT API with a prompt.
Input:
- `prompt` (string): The prompt to send to ChatGPT
- `model` (optional, string): ChatGPT model to use (default: gpt-4)
- `temperature` (optional, number): Temperature for response randomness (0-2, default: 0.7)
- `max_tokens` (optional, number): Maximum tokens in response (default: 1000)
Output:
- ChatGPT response with model information and token usage statistics
Example:
ChatGPT Response
Model: gpt-4
Here's a comprehensive explanation of quantum computing...
---
Usage:
- Prompt tokens: 15
- Completion tokens: 245
- Total tokens: 260
call-claude
Call Anthropic's Claude API with a prompt.
Input:
- `prompt` (string): The prompt to send to Claude
- `model` (optional, string): Claude model to use (default: claude-3-sonnet-20240229)
- `temperature` (optional, number): Temperature for response randomness (0-1, default: 0.7)
- `max_tokens` (optional, number): Maximum tokens in response (default: 1000)
Output:
- Claude response with model information and token usage statistics
call-deepseek
Call DeepSeek API with a prompt.
Input:
- `prompt` (string): The prompt to send to DeepSeek
- `model` (optional, string): DeepSeek model to use (default: deepseek-chat)
- `temperature` (optional, number): Temperature for response randomness (0-2, default: 0.7)
- `max_tokens` (optional, number): Maximum tokens in response (default: 1000)
Output:
- DeepSeek response with model information and token usage statistics
call-gemini
Call Google's Gemini API with a prompt.
Input:
- `prompt` (string): The prompt to send to Gemini
- `model` (optional, string): Gemini model to use (default: gemini-2.5-flash)
- `temperature` (optional, number): Temperature for response randomness (0-2, default: 0.7)
- `max_tokens` (optional, number): Maximum tokens in response (default: 1000)
Output:
- Gemini response with model information and token usage statistics
call-grok
Call xAI's Grok API with a prompt.
Input:
- `prompt` (string): The prompt to send to Grok
- `model` (optional, string): Grok model to use (default: grok-3)
- `temperature` (optional, number): Temperature for response randomness (0-2, default: 0.7)
- `max_tokens` (optional, number): Maximum tokens in response (default: 1000)
Output:
- Grok response with model information and token usage statistics
call-kimi
Call Moonshot AI's Kimi API with a prompt.
Input:
- `prompt` (string): The prompt to send to Kimi
- `model` (optional, string): Kimi model to use (default: moonshot-v1-8k)
- `temperature` (optional, number): Temperature for response randomness (0-2, default: 0.7)
- `max_tokens` (optional, number): Maximum tokens in response (default: 1000)
Output:
- Kimi response with model information and token usage statistics
call-perplexity
Call Perplexity AI's API with a prompt.
Input:
- `prompt` (string): The prompt to send to Perplexity
- `model` (optional, string): Perplexity model to use (default: sonar-pro)
- `temperature` (optional, number): Temperature for response randomness (0-2, default: 0.7)
- `max_tokens` (optional, number): Maximum tokens in response (default: 1000)
Output:
- Perplexity response with model information and token usage statistics
call-mistral
Call Mistral AI's API with a prompt.
Input:
- `prompt` (string): The prompt to send to Mistral
- `model` (optional, string): Mistral model to use (default: mistral-large-latest)
- `temperature` (optional, number): Temperature for response randomness (0-2, default: 0.7)
- `max_tokens` (optional, number): Maximum tokens in response (default: 1000)
Output:
- Mistral response with model information and token usage statistics
🔄 Combined Tools
call-all-llms
Call all available LLM APIs (ChatGPT, Claude, DeepSeek, Gemini, Grok, Kimi, Perplexity, Mistral) with the same prompt and get combined responses.
Input:
- `prompt` (string): The prompt to send to all LLMs
- `temperature` (optional, number): Temperature for response randomness (0-2, default: 0.7)
- `max_tokens` (optional, number): Maximum tokens in response (default: 1000)
Output:
- Combined responses from all LLMs with individual model information and usage statistics
- Summary of successful responses and total tokens used
Example:
Multi-LLM Response
Prompt: Explain quantum computing in simple terms
---
## CHATGPT
Model: gpt-4
Quantum computing is like having a super-powered computer...
---
## CLAUDE
Model: claude-3-sonnet-20240229
Quantum computing represents a fundamental shift...
---
## DEEPSEEK
Model: deepseek-chat
Quantum computing harnesses the principles of quantum mechanics...
---
## GEMINI
Model: gemini-2.5-flash
Quantum computing is a revolutionary approach to computation...
---
Summary:
- Successful responses: 4/4
- Total tokens used: 1650
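Under the hood, a combined call of this kind can fan the prompt out to every configured provider concurrently. The sketch below is illustrative rather than the server's actual code: the per-provider client functions passed in are hypothetical stand-ins, and `Promise.allSettled` keeps one failing provider from sinking the whole response.

```typescript
// Illustrative sketch: fan one prompt out to several providers at once.
// The client functions are hypothetical stand-ins for the real
// per-provider implementations.
type LLMCall = (prompt: string) => Promise<string>;

async function callAll(clients: Record<string, LLMCall>, prompt: string): Promise<string> {
  const names = Object.keys(clients);
  // allSettled, not all: one provider failing must not discard the rest.
  const results = await Promise.allSettled(names.map((name) => clients[name](prompt)));
  return names
    .map((name, i) => {
      const r = results[i];
      const body = r.status === "fulfilled" ? r.value : `Error: ${String(r.reason)}`;
      return `## ${name.toUpperCase()}\n${body}`;
    })
    .join("\n---\n");
}
```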
call-llm
Call a specific LLM provider by name.
Input:
- `provider` (string): The LLM provider to call ("chatgpt", "claude", "deepseek", "gemini", "grok", "kimi", "perplexity", or "mistral")
- `prompt` (string): The prompt to send to the LLM
- `model` (optional, string): Model to use (uses provider default if not specified)
- `temperature` (optional, number): Temperature for response randomness (0-2, default: 0.7)
- `max_tokens` (optional, number): Maximum tokens in response (default: 1000)
Output:
- Response from the specified LLM with model information and usage statistics
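A provider-dispatch tool like this is essentially a lookup into a map of client functions. A minimal sketch, with hypothetical names standing in for the real implementations:

```typescript
// Hedged sketch of call-llm dispatch; the client map and its functions
// are hypothetical stand-ins for the real per-provider clients.
const PROVIDERS = [
  "chatgpt", "claude", "deepseek", "gemini",
  "grok", "kimi", "perplexity", "mistral",
] as const;
type Provider = (typeof PROVIDERS)[number];

async function callLLM(
  clients: Partial<Record<Provider, (prompt: string) => Promise<string>>>,
  provider: Provider,
  prompt: string
): Promise<string> {
  const client = clients[provider];
  if (!client) throw new Error(`Provider not configured: ${provider}`);
  return client(prompt);
}
```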
Installation
- Clone this repository:
git clone <repository-url>
cd cross-llm-mcp
- Install dependencies:
npm install
- Build the project:
npm run build
Getting API Keys
OpenAI/ChatGPT
- Visit OpenAI Platform
- Sign up or log in to your account
- Create a new API key
- Add it to your `.env` file as `OPENAI_API_KEY`
Anthropic/Claude
- Visit Anthropic Console
- Sign up or log in to your account
- Create a new API key
- Add it to your `.env` file as `ANTHROPIC_API_KEY`
DeepSeek
- Visit DeepSeek Platform
- Sign up or log in to your account
- Create a new API key
- Add it to your `.env` file as `DEEPSEEK_API_KEY`
Google Gemini
- Visit Google AI Studio
- Sign up or log in to your Google account
- Create a new API key
- Add it to your Claude Desktop configuration as `GEMINI_API_KEY`
xAI/Grok
- Visit xAI Platform
- Sign up or log in to your account
- Create a new API key
- Add it to your Claude Desktop configuration as `XAI_API_KEY`
Moonshot AI/Kimi
- Visit Moonshot AI Platform
- Sign up or log in to your account
- Create a new API key
- Add it to your Claude Desktop configuration as `KIMI_API_KEY`
Perplexity AI
- Visit the Perplexity AI Platform
- Sign up or log in to your account
- Generate a new API key from the developer console
- Add it to your Claude Desktop configuration as `PERPLEXITY_API_KEY`
Mistral AI
- Visit the Mistral AI Console
- Sign up or log in to your account
- Create a new API key
- Add it to your Claude Desktop configuration as `MISTRAL_API_KEY`
Usage
Configuring Claude Desktop
Add the following configuration to your Claude Desktop MCP settings:
{
"cross-llm-mcp": {
"command": "node",
"args": ["/path/to/your/cross-llm-mcp/build/index.js"],
"cwd": "/path/to/your/cross-llm-mcp",
"env": {
"OPENAI_API_KEY": "your_openai_api_key_here",
"ANTHROPIC_API_KEY": "your_anthropic_api_key_here",
"DEEPSEEK_API_KEY": "your_deepseek_api_key_here",
"GEMINI_API_KEY": "your_gemini_api_key_here",
"XAI_API_KEY": "your_grok_api_key_here",
"KIMI_API_KEY": "your_kimi_api_key_here",
"PERPLEXITY_API_KEY": "your_perplexity_api_key_here",
"MISTRAL_API_KEY": "your_mistral_api_key_here"
}
}
}
Replace the paths and API keys with your actual values:
- Update the `args` path to point to your `build/index.js` file
- Update the `cwd` path to your project directory
- Add your actual API keys to the `env` section
Running the Server
The server runs automatically when configured in Claude Desktop. You can also run it manually:
npm start
The server runs on stdio and can be connected to any MCP-compatible client.
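If you want to exercise the server outside Claude Desktop, you can drive it from a small MCP client over stdio. A minimal sketch using the TypeScript MCP SDK; the tool name and arguments match the examples below, and the build path assumes the default layout:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Minimal smoke test: spawn the built server over stdio and invoke one tool.
async function main() {
  const transport = new StdioClientTransport({
    command: "node",
    args: ["build/index.js"],
  });
  const client = new Client({ name: "smoke-test", version: "1.0.0" });
  await client.connect(transport);

  const result = await client.callTool({
    name: "call-chatgpt",
    arguments: { prompt: "Say hello in one sentence" },
  });
  console.log(result.content);

  await client.close();
}

main().catch(console.error);
```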
Example Queries
Here are some example queries you can make with this MCP server:
Call ChatGPT
{
"tool": "call-chatgpt",
"arguments": {
"prompt": "Explain quantum computing in simple terms",
"temperature": 0.7,
"max_tokens": 500
}
}
Call Claude
{
"tool": "call-claude",
"arguments": {
"prompt": "What are the benefits of renewable energy?",
"model": "claude-3-sonnet-20240229"
}
}
Call All LLMs
{
"tool": "call-all-llms",
"arguments": {
"prompt": "Write a short poem about artificial intelligence",
"temperature": 0.8
}
}
Call Specific LLM
{
"tool": "call-llm",
"arguments": {
"provider": "deepseek",
"prompt": "Explain machine learning algorithms",
"max_tokens": 800
}
}
Call Gemini
{
"tool": "call-gemini",
"arguments": {
"prompt": "Write a creative story about AI",
"model": "gemini-2.5-flash",
"temperature": 0.9
}
}
Call Grok
{
"tool": "call-grok",
"arguments": {
"prompt": "Tell me a joke about programming",
"model": "grok-3",
"temperature": 0.8
}
}
Call Kimi
{
"tool": "call-kimi",
"arguments": {
"prompt": "Summarise the plot of The Matrix in two sentences",
"model": "moonshot-v1-8k",
"temperature": 0.7
}
}
Call Perplexity
{
"tool": "call-perplexity",
"arguments": {
"prompt": "Summarize the latest AI research highlights in two paragraphs",
"model": "sonar-medium-online",
"temperature": 0.6
}
}
Call Mistral
{
"tool": "call-mistral",
"arguments": {
"prompt": "Draft a concise product update for stakeholders",
"model": "mistral-large-latest",
"temperature": 0.7
}
}
Use Cases
1. Multi-Perspective Analysis
Use call-all-llms to get different perspectives on the same topic from multiple AI models.
2. Model Comparison
Compare responses from different LLMs to understand their strengths and weaknesses.
3. Redundancy and Reliability
If one LLM is unavailable, you can still get responses from other providers.
4. Cost Optimization
Choose the most cost-effective LLM for your specific use case.
5. Quality Assurance
Cross-reference responses from multiple models to validate information.
Configuration
Claude Desktop Setup
The recommended way to use this MCP server is through Claude Desktop with environment variables configured directly in the MCP settings:
{
"cross-llm-mcp": {
"command": "node",
"args": [
"/Users/jamessangalli/Documents/projects/cross-llm-mcp/build/index.js"
],
"cwd": "/Users/jamessangalli/Documents/projects/cross-llm-mcp",
"env": {
"OPENAI_API_KEY": "sk-proj-your-openai-key-here",
"ANTHROPIC_API_KEY": "sk-ant-your-anthropic-key-here",
"DEEPSEEK_API_KEY": "sk-your-deepseek-key-here",
"GEMINI_API_KEY": "your-gemini-api-key-here"
}
}
}
Environment Variables
The server reads the following environment variables:
- `OPENAI_API_KEY`: Your OpenAI API key
- `ANTHROPIC_API_KEY`: Your Anthropic API key
- `DEEPSEEK_API_KEY`: Your DeepSeek API key
- `GEMINI_API_KEY`: Your Google Gemini API key
- `XAI_API_KEY`: Your xAI Grok API key
- `KIMI_API_KEY`: Your Moonshot AI Kimi API key
- `PERPLEXITY_API_KEY`: Your Perplexity AI API key
- `MISTRAL_API_KEY`: Your Mistral AI API key
- `DEFAULT_CHATGPT_MODEL`: Default ChatGPT model (default: gpt-4)
- `DEFAULT_CLAUDE_MODEL`: Default Claude model (default: claude-3-sonnet-20240229)
- `DEFAULT_DEEPSEEK_MODEL`: Default DeepSeek model (default: deepseek-chat)
- `DEFAULT_GEMINI_MODEL`: Default Gemini model (default: gemini-2.5-flash)
- `DEFAULT_GROK_MODEL`: Default Grok model (default: grok-3)
- `DEFAULT_KIMI_MODEL`: Default Kimi model (default: moonshot-v1-8k)
- `DEFAULT_PERPLEXITY_MODEL`: Default Perplexity model (default: sonar-pro)
- `DEFAULT_MISTRAL_MODEL`: Default Mistral model (default: mistral-large-latest)
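Inside the server, these variables only need to be read once at startup, with the documented defaults as fallbacks. A minimal sketch (variable names come from the list above; this is not the project's actual configuration code):

```typescript
// Resolve configuration from the environment, falling back to the
// documented defaults. Keys left unset simply disable that provider.
const config = {
  openaiApiKey: process.env.OPENAI_API_KEY,
  anthropicApiKey: process.env.ANTHROPIC_API_KEY,
  chatgptModel: process.env.DEFAULT_CHATGPT_MODEL ?? "gpt-4",
  claudeModel: process.env.DEFAULT_CLAUDE_MODEL ?? "claude-3-sonnet-20240229",
  deepseekModel: process.env.DEFAULT_DEEPSEEK_MODEL ?? "deepseek-chat",
  geminiModel: process.env.DEFAULT_GEMINI_MODEL ?? "gemini-2.5-flash",
};
```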
API Endpoints
This MCP server uses the following API endpoints:
- OpenAI: `https://api.openai.com/v1/chat/completions`
- Anthropic: `https://api.anthropic.com/v1/messages`
- DeepSeek: `https://api.deepseek.com/v1/chat/completions`
- Google Gemini: `https://generativelanguage.googleapis.com/v1/models/{model}:generateContent`
- xAI Grok: `https://api.x.ai/v1/chat/completions`
- Moonshot AI Kimi: `https://api.moonshot.ai/v1/chat/completions`
- Perplexity AI: `https://api.perplexity.ai/chat/completions`
- Mistral AI: `https://api.mistral.ai/v1/chat/completions`
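Most of these providers expose an OpenAI-compatible chat-completions endpoint, so a single request shape covers them (Gemini and Anthropic differ). A hedged sketch using superagent, the HTTP client listed under Dependencies, shown here against the DeepSeek endpoint:

```typescript
import superagent from "superagent";

// Sketch of one chat-completions request; the payload is the
// OpenAI-compatible shape shared by most endpoints listed above.
async function chatCompletion(apiKey: string, prompt: string): Promise<string> {
  const res = await superagent
    .post("https://api.deepseek.com/v1/chat/completions")
    .set("Authorization", `Bearer ${apiKey}`)
    .set("Content-Type", "application/json")
    .send({
      model: "deepseek-chat",
      messages: [{ role: "user", content: prompt }],
      temperature: 0.7,
      max_tokens: 1000,
    });
  return res.body.choices[0].message.content;
}
```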
Error Handling
The server includes comprehensive error handling with detailed messages:
Missing API Key
**ChatGPT Error:** OpenAI API key not configured
Invalid API Key
**Claude Error:** Claude API error: Invalid API key - please check your Anthropic API key
Rate Limiting
**DeepSeek Error:** DeepSeek API error: Rate limit exceeded - please try again later
Payment Issues
**ChatGPT Error:** ChatGPT API error: Payment required - please check your OpenAI billing
Network Issues
**Claude Error:** Claude API error: Network timeout
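Error strings like those above can be produced by mapping common HTTP status codes to friendlier hints before falling back to the provider's raw message. A minimal sketch (not the project's actual error-handling code):

```typescript
// Map common HTTP statuses to the friendlier hints shown above; anything
// else falls through to the provider's raw error message.
function describeApiError(provider: string, status: number, raw: string): string {
  const hints: Record<number, string> = {
    401: "Invalid API key - please check your key",
    402: "Payment required - please check your billing",
    429: "Rate limit exceeded - please try again later",
  };
  return `**${provider} Error:** ${provider} API error: ${hints[status] ?? raw}`;
}
```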
Supported Models
ChatGPT Models
- `gpt-4`
- `gpt-4-turbo`
- `gpt-3.5-turbo`
- And other OpenAI models
Claude Models
- `claude-3-sonnet-20240229`
- `claude-3-opus-20240229`
- `claude-3-haiku-20240307`
- And other Anthropic models
DeepSeek Models
- `deepseek-chat`
- `deepseek-coder`
- And other DeepSeek models
Gemini Models
- `gemini-2.5-flash` (default)
- `gemini-2.5-pro`
- `gemini-2.0-flash`
- `gemini-2.0-flash-001`
- And other Google Gemini models
Grok Models
- `grok-3` (default)
- And other xAI Grok models
Kimi Models
- `moonshot-v1-8k` (default)
- `moonshot-v1-32k`
- `moonshot-v1-128k`
- And other Moonshot AI Kimi models
Perplexity Models
- `sonar-pro` (default)
- `sonar-small-online`
- `sonar-medium`
- And other Perplexity models
Mistral Models
- `mistral-large-latest` (default)
- `mistral-small-latest`
- `mixtral-8x7b-32768`
- And other Mistral models
Project Structure
cross-llm-mcp/
├── src/
│ ├── index.ts # Main MCP server with all 10 tools
│ ├── types.ts # TypeScript type definitions
│ └── llm-clients.ts # LLM API client implementations
├── build/ # Compiled JavaScript output
├── env.example # Environment variables template
├── example-usage.md # Detailed usage examples
├── package.json # Project dependencies and scripts
└── README.md # This file
Dependencies
- `@modelcontextprotocol/sdk` - MCP SDK for server implementation
- `superagent` - HTTP client for API requests
- `zod` - Schema validation for tool parameters
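Together these fit roughly like so: zod declares each tool's parameter schema, and the MCP SDK registers the tool and handles the transport. A hedged sketch of one registration; the client function here is a hypothetical stand-in for the real implementation:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const server = new McpServer({ name: "cross-llm-mcp", version: "1.0.0" });

// Hypothetical stand-in for the real ChatGPT client in llm-clients.ts.
async function callChatGPT(prompt: string, model?: string): Promise<string> {
  return `(response for "${prompt}" via ${model ?? "gpt-4"})`;
}

// Register one tool: zod validates the arguments, the handler returns
// MCP text content.
server.tool(
  "call-chatgpt",
  {
    prompt: z.string(),
    model: z.string().optional(),
    temperature: z.number().min(0).max(2).optional(),
    max_tokens: z.number().optional(),
  },
  async ({ prompt, model }) => ({
    content: [{ type: "text" as const, text: await callChatGPT(prompt, model) }],
  })
);
```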
Development
Building the Project
npm run build
Adding New LLM Providers
To add a new LLM provider:
- Add the provider type to `src/types.ts`
- Implement the client in `src/llm-clients.ts` (see the sketch below)
- Add the tool to `src/index.ts`
- Update the `callAllLLMs` method to include the new provider
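For the client step, most providers follow the OpenAI-compatible pattern, so a new client can mirror the existing ones. A hedged sketch; the endpoint URL, default model, and environment variable name are all hypothetical placeholders:

```typescript
import superagent from "superagent";

// Hypothetical new-provider client; swap in the real endpoint, default
// model, and API-key variable for the provider you are adding.
export async function callNewProvider(
  prompt: string,
  model = "newprovider-default-model"
): Promise<string> {
  const apiKey = process.env.NEWPROVIDER_API_KEY; // hypothetical variable
  if (!apiKey) throw new Error("NewProvider Error: API key not configured");
  const res = await superagent
    .post("https://api.newprovider.example/v1/chat/completions") // placeholder URL
    .set("Authorization", `Bearer ${apiKey}`)
    .send({ model, messages: [{ role: "user", content: prompt }] });
  return res.body.choices[0].message.content;
}
```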
Troubleshooting
Common Issues
Server won't start
- Check that all dependencies are installed: `npm install`
- Verify the build was successful: `npm run build`
- Ensure the `.env` file exists and has valid API keys
API errors
- Verify your API keys are correct and active
- Check your API usage limits and billing status
- Ensure you're using supported model names
No responses
- Check that at least one API key is configured
- Verify network connectivity
- Look for error messages in the response
Debug Mode
For debugging, you can run the server directly:
node build/index.js
License
This project is licensed under the MIT License - see the LICENSE.md file for details.
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
Support
If you encounter any issues or have questions, please:
- Check the troubleshooting section above
- Review the error messages for specific guidance
- Ensure your API keys are properly configured
- Verify your network connectivity