# LLM Responses MCP Server
A Model Context Protocol (MCP) server that enables collaborative debates between multiple AI agents, allowing them to discuss and reach consensus on user prompts.
## Overview
This project implements an MCP server that facilitates multi-turn conversations between LLMs with these key features:
- **Session-based collaboration** - LLMs can register as participants in a debate session
- **Deliberative consensus** - LLMs can engage in extended discussions to reach agreement
- **Real-time response sharing** - All participants can view and respond to each other's contributions
The server provides four main tool calls:

- `register-participant`: Allows an LLM to join a collaboration session with its initial response
- `submit-response`: Allows an LLM to submit follow-up responses during the debate
- `get-responses`: Allows an LLM to retrieve all responses from other LLMs in the session
- `get-session-status`: Allows an LLM to check if the registration waiting period has completed
This enables a scenario where multiple AI agents (like the “Council of Ephors”) can engage in extended deliberation about a user’s question, debating with each other until they reach a solid consensus.
## Installation

```bash
# Install dependencies
bun install
```
## Development

```bash
# Build the TypeScript code
bun run build

# Start the server in development mode
bun run dev
```
## Testing with MCP Inspector

The project includes support for the MCP Inspector, a tool for testing and debugging MCP servers.

```bash
# Run the server with MCP Inspector
bun run inspect
```

The `inspect` script uses `npx` to run the MCP Inspector, which launches a web interface in your browser for interacting with your MCP server.
This will allow you to:
- Explore available tools and resources
- Test tool calls with different parameters
- View the server’s responses
- Debug your MCP server implementation
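For reference, the `inspect` script is typically a thin `npx` wrapper. A plausible `package.json` script body is sketched below; the exact server entry point is an assumption, so adjust it to match the project:

```bash
# Hypothetical "inspect" script body; adjust the entry point to match the project
npx @modelcontextprotocol/inspector bun run src/index.ts
```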
## Usage

The server exposes two endpoints:

- `/sse` - Server-Sent Events endpoint for MCP clients to connect
- `/messages` - HTTP endpoint for MCP clients to send messages
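As a minimal sketch of how a client might connect using the official TypeScript SDK (the host and port below are assumptions for a local run; substitute your own deployment URL):

```typescript
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { SSEClientTransport } from '@modelcontextprotocol/sdk/client/sse.js';

// Point the transport at the server's SSE endpoint (URL is an assumption)
const transport = new SSEClientTransport(new URL('http://localhost:62887/sse'));
const client = new Client({ name: 'example-client', version: '1.0.0' });
await client.connect(transport);
```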
## MCP Tools

### register-participant
Register as a participant in a collaboration session:
```typescript
// Example tool call
const result = await client.callTool({
  name: 'register-participant',
  arguments: {
    name: 'Socrates',
    prompt: 'What is the meaning of life?',
    initial_response: 'The meaning of life is to seek wisdom through questioning...',
    persona_metadata: {
      style: 'socratic',
      era: 'ancient greece'
    } // Optional
  }
});
```
The server waits for a 3-second registration period after the last participant joins before responding. The response includes all participants’ initial responses, enabling each LLM to immediately respond to other participants’ views when the registration period ends.
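The responses come back inside the standard MCP content array. A minimal sketch of reading them follows; the exact format of the text payload is an assumption:

```typescript
// Tool results carry an array of content items; extract the text parts
const text = (result.content as Array<{ type: string; text?: string }>)
  .filter((item) => item.type === 'text')
  .map((item) => item.text)
  .join('\n');
console.log(text); // all participants' initial responses
```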
### submit-response
Submit a follow-up response during the debate:
```typescript
// Example tool call
const result = await client.callTool({
  name: 'submit-response',
  arguments: {
    sessionId: 'EPH4721R-Socrates', // Session ID received after registration
    prompt: 'What is the meaning of life?',
    response: 'In response to Plato, I would argue that...'
  }
});
```
### get-responses
Retrieve all responses from the debate session:
```typescript
// Example tool call
const result = await client.callTool({
  name: 'get-responses',
  arguments: {
    sessionId: 'EPH4721R-Socrates', // Session ID received after registration
    prompt: 'What is the meaning of life?' // Optional
  }
});
```
The response includes all participants’ contributions in chronological order.
### get-session-status
Check if the registration waiting period has elapsed:
```typescript
// Example tool call
const result = await client.callTool({
  name: 'get-session-status',
  arguments: {
    prompt: 'What is the meaning of life?'
  }
});
```
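One way to use this is to poll until the waiting period has elapsed before fetching responses. The sketch below assumes the status payload signals completion with a recognizable string, which is an assumption about the response format:

```typescript
// Poll get-session-status until the registration window has closed
async function waitForRegistration(prompt: string): Promise<void> {
  for (;;) {
    const status = await client.callTool({
      name: 'get-session-status',
      arguments: { prompt }
    });
    // Assumption: the textual payload indicates completion
    if (JSON.stringify(status.content).includes('complete')) return;
    await new Promise((resolve) => setTimeout(resolve, 1000)); // retry every second
  }
}
```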
## Collaborative Debate Flow

1. LLMs register as participants with their initial responses to the prompt
2. The server waits 3 seconds after the last registration before sending responses
3. When the registration period ends, all participants receive the compendium of initial responses from all participants
4. Participants can then submit follow-up responses, responding to each other's points
5. The debate continues until the participants reach a consensus or a maximum number of rounds is reached
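Putting the tools together, one participant's side of this flow might be orchestrated roughly as follows. The round cap, session ID handling, and rebuttal drafting are all assumptions, not part of the server API:

```typescript
const prompt = 'What is the meaning of life?';

// 1. Register with an initial position
await client.callTool({
  name: 'register-participant',
  arguments: {
    name: 'Socrates',
    prompt,
    initial_response: 'The meaning of life is to seek wisdom through questioning...'
  }
});
const sessionId = 'EPH4721R-Socrates'; // in practice, parse this from the registration result

// 2. Debate for a bounded number of rounds (client-side cap is an assumption)
const MAX_ROUNDS = 5;
for (let round = 0; round < MAX_ROUNDS; round++) {
  // Read everyone's contributions so far
  const responses = await client.callTool({
    name: 'get-responses',
    arguments: { sessionId, prompt }
  });
  // ...feed `responses` to the LLM and draft a rebuttal...
  await client.callTool({
    name: 'submit-response',
    arguments: {
      sessionId,
      prompt,
      response: 'In this round I would refine my position by...'
    }
  });
}
```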
## License

MIT
## Deployment to EC2
This project includes Docker configuration for easy deployment to EC2 or any other server environment.
### Prerequisites
- An EC2 instance running Amazon Linux 2 or Ubuntu
- Security group configured to allow inbound traffic on port 62887
- SSH access to the instance
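If the port is not yet open, one way to add an inbound rule is with the AWS CLI. The group ID below is a placeholder, and `0.0.0.0/0` opens the port to everyone, so narrow the CIDR range if you can:

```bash
# Allow inbound TCP traffic on port 62887 (replace the group ID with your own)
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 62887 \
  --cidr 0.0.0.0/0
```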
### Deployment Steps

1. Clone the repository to your EC2 instance:

   ```bash
   git clone <your-repository-url>
   cd <repository-directory>
   ```

2. Make the deployment script executable:

   ```bash
   chmod +x deploy.sh
   ```

3. Run the deployment script:

   ```bash
   ./deploy.sh
   ```
The script will:
- Install Docker and Docker Compose if they’re not already installed
- Build the Docker image
- Start the container in detached mode
- Display the public URL where your MCP server is accessible
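In outline, the script does something like the following. This is a sketch based on the step list above, not the actual contents of `deploy.sh`:

```bash
# Sketch of deploy.sh's main steps; the real script may differ
docker-compose build
docker-compose up -d
# EC2 instance metadata endpoint for the public IPv4 address
PUBLIC_IP=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4)
echo "MCP server available at http://${PUBLIC_IP}:62887/sse"
```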
### Manual Deployment

If you prefer to deploy manually:

1. Build the Docker image:

   ```bash
   docker-compose build
   ```

2. Start the container:

   ```bash
   docker-compose up -d
   ```

3. Verify the container is running:

   ```bash
   docker-compose ps
   ```
### Accessing the Server

Once deployed, your MCP server will be accessible at:

- `http://<ec2-public-ip>:62887/sse` - SSE endpoint
- `http://<ec2-public-ip>:62887/messages` - Messages endpoint

Make sure port 62887 is open in your EC2 security group!
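To sanity-check the deployment, you can hold open a connection to the SSE endpoint with curl; the `-N` flag disables buffering so streamed events print as they arrive:

```bash
# Should stay connected and print SSE events as they arrive
curl -N http://<ec2-public-ip>:62887/sse
```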