Frequently Asked Questions (FAQ) - Resemble AI Voice Generation MCP Server
Q: What is the Resemble AI Voice Generation MCP Server? A: It’s a server implementation that bridges Resemble AI’s voice generation API with platforms like Claude and Cursor, using the Model Context Protocol (MCP) to enable AI agents to generate realistic voice audio from text.
Q: What is MCP? A: MCP (Model Context Protocol) is an open protocol that standardizes how applications provide context to LLMs, allowing AI models to access and interact with external data sources and tools.
Q: What are the main features of this server? A: Key features include generating voice audio from text, listing available voice models, supporting multiple connection methods (SSE and StdIO), and providing flexible audio output options (file or base64).
Q: What connection methods does the server support? A: The server supports SSE Transport (Network-based Server-Sent Events) and StdIO Transport (Direct process communication).
Q: What are the prerequisites for using the server? A: You need Python 3.10 or higher and a Resemble AI API key.
Q: How do I set up the Resemble AI API key? A: You can set it as an environment variable (RESEMBLE_API_KEY) or create a .env file in the project root with the same variable.
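As a quick sketch (the placeholder key and syntax below assume a POSIX shell), either approach looks like:

```shell
# Option 1: export the key for the current shell session
export RESEMBLE_API_KEY="your-api-key-here"

# Option 2: persist it in a .env file at the project root
printf 'RESEMBLE_API_KEY=%s\n' "your-api-key-here" > .env

# Quick sanity check that the variable is visible
echo "Key set: ${RESEMBLE_API_KEY:+yes}"
```

The .env approach keeps the key scoped to the project rather than your shell profile.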
Q: How do I run the server? A: You can use the run_server.sh script or the CLI directly (python -m src.cli). Choose the implementation and port as needed.
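For illustration only (the --transport and --port flag names are assumptions, not confirmed options; check run_server.sh or the CLI's --help output for the actual flags), launching might look like:

```shell
# Via the helper script
./run_server.sh

# Or via the CLI module directly, choosing transport and port
# (flag names below are hypothetical; consult the CLI's --help)
python -m src.cli --transport sse --port 8000
```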
Q: How do I connect to Claude Desktop? A: Create a claude_desktop_config.json file with the appropriate configuration for SSE or StdIO transport.
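As a sketch for the StdIO case (the server name, key, and command are placeholders; you may need absolute paths depending on where the project is checked out), an entry might look like:

```json
{
  "mcpServers": {
    "resemble-ai": {
      "command": "python",
      "args": ["-m", "src.cli"],
      "env": {
        "RESEMBLE_API_KEY": "your-api-key-here"
      }
    }
  }
}
```

With StdIO, Claude Desktop launches and manages the server process itself, so no separate server needs to be running.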
Q: How do I connect to Cursor? A: Go to Settings → AI → MCP Servers in Cursor and add a new server, selecting either SSE or Subprocess (StdIO) as the connection type, and configure the URL or command accordingly.
Q: What tools are available? A: The server provides the list_voices tool (to list available voice models) and the generate_tts tool (to generate voice audio from text).
Q: What are the parameters for the generate_tts tool? A: Required parameters are text (the text to convert) and voice_id (the ID of the voice to use). Optional parameters are return_type ('file' or 'base64', default: 'file') and output_filename (a custom name for the saved audio file).
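For example, a call supplying the two required parameters plus an explicit return type might pass arguments like this (the voice_id value is a placeholder):

```json
{
  "text": "Hello from Resemble AI!",
  "voice_id": "your-voice-id",
  "return_type": "base64"
}
```

Omitting return_type falls back to the default of 'file', saving the audio to disk instead of returning it inline.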
Q: What implementations are included in the project? A: The project includes resemble_mcp_server.py (MCP SDK with SSE), resemble_stdio_server.py (StdIO transport), resemble_http_server.py (HTTP with SSE), resemble_ai_server.py (direct API), and resemble_ai_sdk_server.py (Resemble SDK).
Q: What should I do if I encounter MCP SDK import errors? A: The server will automatically fall back to the HTTP implementation with SSE transport.
Q: What should I check if Claude or Cursor cannot connect to the server? A: Verify that the server is running, the correct URL is configured, your API key is valid, and check the server logs for errors.
Q: When should I use SSE Transport vs. StdIO Transport? A: Use SSE Transport when you want to run the server separately or on a different machine. Use StdIO Transport when you want Claude/Cursor to manage the server process for you.
Q: What Python version is required? A: Python 3.10 or higher is required.
Q: Where can I find example usage? A: Examples can be found in the examples/ directory.
Q: Where is the generated audio output stored? A: By default, audio files are stored in the output/ directory. This can be configured in the .env file.
Q: What if I get an API connection error? A: Ensure you are using the correct API endpoint (https://app.resemble.ai/api/v2/) and that your API key is valid.
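One quick way to check both at once is a manual request. This is a sketch only: the Bearer auth scheme and the projects path are assumptions, so confirm both against Resemble's API documentation before relying on it:

```shell
# Expect HTTP 200 with project JSON if the key and endpoint are valid;
# a 401 indicates an invalid or missing key.
curl -s -o /dev/null -w "%{http_code}\n" \
  -H "Authorization: Bearer $RESEMBLE_API_KEY" \
  https://app.resemble.ai/api/v2/projects
```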
Q: Do I need a project in my Resemble AI account? A: Yes, the API requires at least one project in your Resemble account. Create one through the Resemble AI dashboard if needed.
Q: What do I do if Cursor fails to connect via SSE? A: Ensure the server is running on the specified port, you're using the correct /sse endpoint, and no firewall is blocking the connection; if it still fails, try restarting both the server and Cursor.
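A quick manual check (assuming the host and port shown; adjust to your configuration) is to hit the SSE endpoint directly:

```shell
# -N disables buffering so SSE events stream as they arrive;
# if the endpoint is up you should see event data rather than
# a connection-refused error.
curl -N http://localhost:8000/sse
```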
Resemble AI Voice Generation
Project Details
- obaid/resemble-mcp
- Last Updated: 3/7/2025