MCP vLLM Benchmarking Tool

This is a proof of concept showing how MCP can be used to interactively benchmark vLLM.

We are not new to benchmarking; see our blog post:

Benchmarking vLLM

This is just an exploration of possibilities with MCP.

Usage

  1. Clone the repository
  2. Add it to your MCP servers:
```json
{
    "mcpServers": {
        "mcp-vllm": {
            "command": "uv",
            "args": [
                "run",
                "/Path/TO/mcp-vllm-benchmarking-tool/server.py"
            ]
        }
    }
}
```

Then you can prompt it, for example, like this:

Do a vllm benchmark for this endpoint: http://10.0.101.39:8888 
benchmark the following model: deepseek-ai/DeepSeek-R1-Distill-Llama-8B 
run the benchmark 3 times with 32 num prompts each, then compare the results, but ignore the first iteration as that is just a warmup.
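The aggregation asked for in that prompt (discard the warmup run, then compare the remaining iterations) can be sketched in plain Python. Note that the metric names below (`requests_per_second`, `mean_ttft_ms`) and the sample values are illustrative assumptions, not the tool's actual output schema:

```python
from statistics import mean

def summarize_runs(runs: list[dict], warmup: int = 1) -> dict:
    """Average benchmark metrics across runs, skipping warmup iterations.

    Each run is a dict mapping metric name -> value; the first
    `warmup` runs are discarded before averaging.
    """
    measured = runs[warmup:]
    if not measured:
        raise ValueError("need at least one non-warmup run")
    metrics = measured[0].keys()
    return {m: mean(r[m] for r in measured) for m in metrics}

# Hypothetical results from three iterations (first is the warmup):
runs = [
    {"requests_per_second": 4.1, "mean_ttft_ms": 310.0},  # warmup, ignored
    {"requests_per_second": 5.2, "mean_ttft_ms": 142.0},
    {"requests_per_second": 5.0, "mean_ttft_ms": 150.0},
]
print(summarize_runs(runs))  # averages computed over the last two runs only
```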

Todo:

  • Due to occasional random output from vLLM, the tool may report that it found invalid JSON. This has not been investigated yet.

