# Multi LLM Cross-Check MCP Server

A Model Context Protocol (MCP) server that cross-checks responses from multiple LLM providers simultaneously. It integrates with Claude Desktop as an MCP server, providing a unified interface for querying different LLM APIs.
## Features

- Query multiple LLM providers in parallel
- Currently supports:
  - OpenAI (ChatGPT)
  - Anthropic (Claude)
  - Perplexity AI
  - Google (Gemini)
- Asynchronous parallel processing for faster responses (see the sketch below)
- Easy integration with Claude Desktop
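The parallel fan-out can be pictured with a short asyncio sketch. This is a minimal illustration, not the server's actual code; `query_openai` and `query_anthropic` are hypothetical stand-ins for the real provider calls:

```python
import asyncio

async def query_openai(prompt: str) -> str:
    # Hypothetical stand-in for a real OpenAI API call
    return f"ChatGPT's answer to: {prompt}"

async def query_anthropic(prompt: str) -> str:
    # Hypothetical stand-in for a real Anthropic API call
    return f"Claude's answer to: {prompt}"

async def cross_check(prompt: str) -> dict:
    # Fire all provider queries concurrently, then collect the
    # results keyed by provider name.
    providers = {"ChatGPT": query_openai, "Claude": query_anthropic}
    results = await asyncio.gather(*(fn(prompt) for fn in providers.values()))
    return dict(zip(providers.keys(), results))

if __name__ == "__main__":
    print(asyncio.run(cross_check("What is MCP?")))
```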
## Prerequisites

- Python 3.8 or higher
- API keys for the LLM providers you want to use
- The uv package manager (install with `pip install uv`)
## Installation

1. Clone this repository:

   ```bash
   git clone https://github.com/lior-ps/multi-llm-cross-check-mcp-server.git
   cd multi-llm-cross-check-mcp-server
   ```

2. Initialize the uv environment and install the requirements:

   ```bash
   uv venv
   uv pip install -r requirements.txt
   ```
3. Configure Claude Desktop: create a file named `claude_desktop_config.json` in your Claude Desktop configuration directory (on macOS this is typically `~/Library/Application Support/Claude/`; on Windows, `%APPDATA%\Claude\`) with the following content:

   ```json
   {
     "mcp_servers": [
       {
         "command": "uv",
         "args": [
           "--directory",
           "/multi-llm-cross-check-mcp-server",
           "run",
           "main.py"
         ],
         "env": {
           "OPENAI_API_KEY": "your_openai_key",         // Get from https://platform.openai.com/api-keys
           "ANTHROPIC_API_KEY": "your_anthropic_key",   // Get from https://console.anthropic.com/account/keys
           "PERPLEXITY_API_KEY": "your_perplexity_key", // Get from https://www.perplexity.ai/settings/api
           "GEMINI_API_KEY": "your_gemini_key"          // Get from https://makersuite.google.com/app/apikey
         }
       }
     ]
   }
   ```
### Notes

- You only need to add API keys for the LLM providers you want to use; the server will skip any providers without configured keys.
- You may need to put the full path to the uv executable in the `command` field. You can find it by running `which uv` on macOS/Linux or `where uv` on Windows.
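The skip-unconfigured-providers behavior comes down to checking which API-key variables are present in the server's environment (the `env` block in `claude_desktop_config.json` is passed to the server process). A minimal sketch of that check; the variable names match the config above, but the dictionary itself is illustrative, not the server's actual code:

```python
import os

# Map each provider to the environment variable that holds its API key.
API_KEY_VARS = {
    "ChatGPT": "OPENAI_API_KEY",
    "Claude": "ANTHROPIC_API_KEY",
    "Perplexity": "PERPLEXITY_API_KEY",
    "Gemini": "GEMINI_API_KEY",
}

# A provider is only queried when its key is actually set.
enabled = {name for name, var in API_KEY_VARS.items() if os.environ.get(var)}
print(f"Providers with configured keys: {sorted(enabled)}")
```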
## Using the MCP Server

Once configured:

- The server starts automatically when you open Claude Desktop
- You can invoke the `cross_check` tool in your conversations by asking Claude to "cross check with other LLMs"
- Given a prompt, the tool returns responses from all configured LLM providers
## API Response Format

The server returns a dictionary with a response from each LLM provider:

```json
{
  "ChatGPT": { ... },
  "Claude": { ... },
  "Perplexity": { ... },
  "Gemini": { ... }
}
```
## Error Handling

- If an API key is not provided for a specific LLM, that provider is skipped
- API errors are caught and returned in the response
- Each LLM's response is independent, so errors with one provider won't affect the others (see the sketch below)
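One way to get this isolation is to wrap every provider call so that an exception is converted into an error entry rather than propagating. A minimal sketch under that assumption (not the server's actual code; both providers are hypothetical):

```python
import asyncio

async def flaky_provider(prompt: str) -> str:
    # Hypothetical provider that always fails
    raise RuntimeError("rate limit exceeded")

async def good_provider(prompt: str) -> str:
    # Hypothetical provider that succeeds
    return "a normal answer"

async def safe_query(name, fn, prompt):
    # Convert a provider failure into an error string so that one
    # broken provider cannot take down the whole cross-check.
    try:
        return name, await fn(prompt)
    except Exception as exc:
        return name, f"Error: {exc}"

async def main():
    pairs = await asyncio.gather(
        safe_query("Flaky", flaky_provider, "hello"),
        safe_query("Good", good_provider, "hello"),
    )
    print(dict(pairs))  # {'Flaky': 'Error: rate limit exceeded', 'Good': 'a normal answer'}

asyncio.run(main())
```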
## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.
## License

This project is licensed under the MIT License - see the LICENSE file for details.