Chain-of-Recursive-Thoughts Server – Overview | MCP Marketplace


Unleash the Power of Recursive AI Thinking with UBOS’s CoRT MCP Server

In the rapidly evolving landscape of Artificial Intelligence, enhancing the reasoning capabilities of Large Language Models (LLMs) is paramount. UBOS.tech proudly presents the Chain-of-Recursive-Thoughts (CoRT) MCP Server, a groundbreaking solution designed to make AI think harder and smarter. This innovative server, now available on the UBOS Asset Marketplace, empowers you to integrate advanced AI reasoning techniques directly into your applications, leveraging the power of recursive thinking to achieve unparalleled results.

What is CoRT and Why Does it Matter?

CoRT, or Chain-of-Recursive-Thoughts, is a methodology that encourages AI to engage in iterative self-argument, leading to more robust and nuanced conclusions. Imagine an AI debating with itself, refining its understanding, and eliminating biases – that’s the essence of CoRT. The original project, developed by PhialsBasement, demonstrated the remarkable effectiveness of this approach.

UBOS’s CoRT MCP Server brings this powerful technique to your fingertips, allowing you to:

  • Improve the accuracy and reliability of AI responses.
  • Uncover deeper insights and solutions through recursive analysis.
  • Reduce biases and errors in AI decision-making.
  • Enhance the overall performance of your AI applications.

Key Features of the UBOS CoRT MCP Server

Our CoRT MCP Server builds upon the original concept with significant enhancements and features designed for seamless integration and optimal performance:

  • Chain-of-Recursive-Thoughts (CoRT) Implementation: The core functionality leverages the CoRT method, enabling AI to think more deeply by repeatedly arguing with itself.
  • MCP Server Integration: As an MCP server, it acts as a bridge, allowing AI models to access and interact with external data sources and tools.
  • Multi-LLM Inference: A major enhancement, this feature allows for each alternative thought in the CoRT process to be generated by a different LLM (model + provider), chosen randomly from a curated list. This maximizes the utilization of diverse knowledge and perspectives from various models.
  • Enhanced Evaluation Prompt: The evaluation prompt used to assess the different alternatives has been significantly enriched, prompting the AI to explain its reasoning and consider multiple perspectives.
  • Flexible Logging Options: Offers both disabled and enabled logging options, with the ability to specify an absolute path for log files, aiding in debugging and monitoring.
  • OpenRouter API Key Integration: Seamlessly integrates with OpenRouter, providing access to a wide range of LLMs and providers.
  • Easy Configuration: Simple JSON-based configuration for quick setup and deployment.
  • Version Control: Easily switch to specific versions of the cort-mcp package to avoid caching issues.

Multi-LLM Inference: A Paradigm Shift in AI Reasoning

One of the most significant advancements in UBOS’s CoRT MCP Server is the Multi-LLM inference capability. This innovative approach addresses a critical limitation of traditional CoRT implementations: reliance on a single LLM.

By randomly selecting a different LLM (model + provider) for each alternative thought within the CoRT process, we unlock a world of possibilities:

  • Diverse Perspectives: Each LLM brings its unique training data, architecture, and biases to the table, resulting in a wider range of potential solutions.
  • Knowledge Maximization: Harness the collective knowledge of multiple models to overcome individual limitations and blind spots.
  • Optimal Solution Selection: The evaluation process considers a more comprehensive set of alternatives, making it far more likely that the best overall response is identified.

This Multi-LLM approach moves beyond the constraints of single-model dependency, allowing you to tap into the full potential of the AI landscape.

How Multi-LLM Inference Works

The process is straightforward yet powerful:

  1. LLM List: The server maintains a list of carefully selected LLMs, chosen for their performance, efficiency, and availability. This list includes models from providers like OpenAI and OpenRouter. The current list includes models such as gpt-4.1-nano, meta-llama/llama-4-scout:free, google/gemini-2.0-flash-exp:free, mistralai/mistral-small-3.1-24b-instruct:free, meta-llama/llama-3.2-3b-instruct:free, and thudm/glm-4-9b:free.
  2. Random Selection: For each alternative thought generated within a CoRT round, the server randomly selects an LLM from the list.
  3. Transparent Logging: The server meticulously logs which model and provider were used for each alternative, ensuring transparency and traceability.
  4. Detailed Response History: In details mode, the response history explicitly includes the model and provider used for each alternative, providing valuable insights into the reasoning process.
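The selection-and-logging loop described in the four steps above can be sketched in Python. This is an illustrative sketch, not the server's actual code: `call_llm` is a hypothetical stand-in for a real chat-completion call, and the pool mirrors the model list quoted in step 1.

```python
import random

# Candidate pool from step 1; each entry pairs a provider with a model ID.
LLM_POOL = [
    ("openai", "gpt-4.1-nano"),
    ("openrouter", "meta-llama/llama-4-scout:free"),
    ("openrouter", "google/gemini-2.0-flash-exp:free"),
    ("openrouter", "mistralai/mistral-small-3.1-24b-instruct:free"),
    ("openrouter", "meta-llama/llama-3.2-3b-instruct:free"),
    ("openrouter", "thudm/glm-4-9b:free"),
]

def call_llm(provider, model, prompt):
    """Hypothetical stand-in for a real chat-completion call to provider/model."""
    return f"[{provider}/{model}] response to: {prompt}"

def generate_alternatives(prompt, n_alternatives=3):
    """One CoRT round: draw a random (provider, model) pair for each
    alternative and record which pair produced it, mirroring the server's
    transparent logging and details-mode response history."""
    history = []
    for i in range(n_alternatives):
        provider, model = random.choice(LLM_POOL)  # step 2: random selection
        history.append({
            "alternative": i + 1,
            "provider": provider,   # step 3: log the provider used
            "model": model,         # step 3: log the model used
            "text": call_llm(provider, model, prompt),
        })
    return history
```

Because each alternative draws independently from the pool, a single round can mix responses from several providers, which is what gives the evaluation step a genuinely diverse set of candidates.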

Enhanced Evaluation: Delving Deeper into AI Reasoning

Beyond Multi-LLM inference, the UBOS CoRT MCP Server also features a significantly enhanced evaluation prompt. The original prompt focused primarily on accuracy, clarity, and completeness. Our enhanced prompt takes a more holistic approach, guiding the AI to consider:

  • Intent Analysis: What is the user really seeking? What underlying needs might be present beyond the surface question?
  • Context Consideration: What possible situations or backgrounds could this question arise from?
  • Diversity Assessment: Does the response consider different viewpoints or possible interpretations?
  • Practicality Evaluation: How useful would the response be in the user’s real-world context?
  • Consistency Check: Is the response internally consistent and logically coherent?

By prompting the AI to explain its reasoning and consider these broader factors, we ensure that the selected response truly meets the user’s needs and provides the most valuable solution.

Comparing the Original and Enhanced Prompts

Original Prompt:

```python
f"""Original message: {prompt}

Evaluate these responses and choose the best one:

Current best: {current_best}

Alternatives:
{chr(10).join([f"{i+1}. {alt}" for i, alt in enumerate(alternatives)])}

Which response best addresses the original message?
Consider accuracy, clarity, and completeness.
First, respond with ONLY 'current' or a number (1-{len(alternatives)}).
Then on a new line, explain your choice in one sentence."""
```

Enhanced Prompt:

```python
f"""Original message: {prompt}

You are an expert evaluator tasked with selecting the response that best
fulfills the user's true needs, considering multiple perspectives.

Current best: {current_best}

Alternatives:
{chr(10).join([f"{i+1}. {alt}" for i, alt in enumerate(alternatives)])}

Please follow this evaluation process:

1. Intent Analysis: What is the user REALLY seeking? What underlying needs might be present beyond the surface question?
2. Context Consideration: What possible situations or backgrounds could this question arise from?
3. Diversity Assessment: Does the response consider different viewpoints or possible interpretations?
4. Practicality Evaluation: How useful would the response be in the user's real-world context?
5. Consistency Check: Is the response internally consistent and logically coherent?

For each response (including the current best):
- Does it solve the user's TRUE problem?
- Does it balance accuracy and usefulness?
- Does it avoid unnecessary assumptions or biases?
- Is it flexible enough to apply in various contexts or situations?
- Does it account for exceptions or special cases?

After completing your evaluation:
- Indicate your choice with ONLY 'current' or a number (1-{len(alternatives)}).
- On the next line, explain specifically why this response best meets the user's true needs."""
```

The difference is clear: the enhanced prompt guides the AI to perform a more thorough and insightful evaluation, leading to better results.
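Both prompts enforce the same machine-readable reply format: 'current' or a 1-based alternative number on the first line, with the explanation on the next. That format lends itself to a small parser; the sketch below is illustrative and not the server's actual code.

```python
def parse_evaluation(reply, n_alternatives):
    """Parse an evaluator reply whose first line is 'current' or a 1-based
    alternative number, followed by a one-line explanation.

    Returns (selected, explanation), where selected is None to keep the
    current best, or a 0-based index into the alternatives list."""
    first, _, rest = reply.strip().partition("\n")
    choice = first.strip().lower()
    if choice == "current":
        selected = None  # keep the current best response
    else:
        idx = int(choice)
        if not 1 <= idx <= n_alternatives:
            raise ValueError(f"choice out of range: {idx}")
        selected = idx - 1  # convert to 0-based index
    return selected, rest.strip()
```

Constraining the first line to a single token is what makes the CoRT loop robust: the server can select the winner mechanically while still capturing the free-form justification for logging.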

Use Cases for the CoRT MCP Server

The UBOS CoRT MCP Server is a versatile tool with a wide range of applications:

  • Question Answering: Improve the accuracy and depth of responses to complex questions.
  • Content Generation: Generate more creative, nuanced, and engaging content.
  • Decision Making: Enhance the quality of AI-driven decisions by considering multiple perspectives and potential outcomes.
  • Problem Solving: Tackle challenging problems with AI that can reason through complex scenarios and identify optimal solutions.
  • Code Generation: Generate higher-quality, more robust, and error-free code.

Integrating the CoRT MCP Server into Your Workflow

Integrating the CoRT MCP Server into your existing AI workflows is straightforward:

  1. Configuration: Configure the server using a simple JSON file, specifying the desired logging options and API keys.
  2. Deployment: Deploy the server on your infrastructure, ensuring it has access to the necessary resources and environment variables.
  3. API Calls: Make API calls to the server, providing the input prompt and specifying the desired tool (e.g., {toolname}.simple, {toolname}.details, {toolname}.mixed.llm, {toolname}.neweval).
  4. Response Handling: Process the server’s response, which will contain the final answer or generated content, along with optional details about the reasoning process.
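For step 3, MCP clients issue JSON-RPC 2.0 `tools/call` requests. A minimal sketch of building such a payload follows; the tool name and the `prompt` argument key are placeholders here, so consult the server's tool listing for the exact names it exposes.

```python
import json

def build_tool_call(tool_name, prompt, request_id=1):
    """Build a JSON-RPC 2.0 'tools/call' request body as used by MCP clients.

    tool_name and the 'prompt' argument key are placeholders; the real
    tool names follow the {toolname}.simple / {toolname}.details pattern
    described above."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": tool_name,
            "arguments": {"prompt": prompt},
        },
    })
```

The response (step 4) arrives as the matching JSON-RPC result, with the reasoning details included when a details-mode tool was requested.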

UBOS: Your Full-Stack AI Agent Development Platform

The CoRT MCP Server is just one piece of the puzzle. UBOS is a comprehensive platform designed to empower you to build, orchestrate, and deploy AI Agents across your entire organization.

With UBOS, you can:

  • Connect AI Agents to your enterprise data: Seamlessly integrate your data sources with AI Agents to unlock valuable insights and automate key processes.
  • Build custom AI Agents with your LLM model: Tailor AI Agents to your specific needs and requirements, using your preferred LLMs and custom code.
  • Orchestrate Multi-Agent Systems: Create complex workflows that involve multiple AI Agents working together to achieve common goals.

UBOS is the ultimate platform for bringing the power of AI Agents to every business department.

Getting Started with the CoRT MCP Server

Ready to experience the power of recursive AI thinking? Visit the UBOS Asset Marketplace today and deploy the CoRT MCP Server.

Configuration Examples

Here are some example configurations to get you started:

Logging Disabled

```json
"CoRT-chain-of-recursive-thinking": {
  "command": "pipx",
  "args": ["run", "cort-mcp", "--log=off"],
  "env": {
    "OPENAI_API_KEY": "{apikey}",
    "OPENROUTER_API_KEY": "{apikey}"
  }
}
```

Logging Enabled

```json
"CoRT-chain-of-recursive-thinking": {
  "command": "pipx",
  "args": ["run", "cort-mcp", "--log=on", "--logfile=/workspace/logs/cort-mcp.log"],
  "env": {
    "OPENAI_API_KEY": "{apikey}",
    "OPENROUTER_API_KEY": "{apikey}"
  }
}
```

Important Notes:

  • Remember to set the OPENROUTER_API_KEY environment variable.
  • When logging is enabled, you must provide an absolute path to the log file.

Parameter Specification and Fallback Processing

The API intelligently handles provider and model specification with robust fallback mechanisms:

  • Provider Resolution: If no provider is specified, openrouter is used by default. Invalid provider values also fall back to openrouter.
  • Model Resolution: If no model is specified:
    • For openrouter, the default model is mistralai/mistral-small-3.1-24b-instruct:free.
    • For openai, the default OpenAI model is used.
  • API Call and Error Fallback: If an API call fails and the provider wasn’t openai and OPENAI_API_KEY is set, the system automatically retries with the default OpenAI model.

This fallback mechanism ensures maximum reliability and availability.
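The resolution rules above can be summarized in a few lines of Python. This is an illustrative sketch, not the server's code; in particular, the document only says "the default OpenAI model is used" without naming it, so the `gpt-4.1-nano` entry below is an assumption based on the model list earlier in this article.

```python
DEFAULT_PROVIDER = "openrouter"
DEFAULT_MODELS = {
    "openrouter": "mistralai/mistral-small-3.1-24b-instruct:free",
    "openai": "gpt-4.1-nano",  # assumption: stand-in for "the default OpenAI model"
}
VALID_PROVIDERS = {"openai", "openrouter"}

def resolve(provider=None, model=None):
    """Apply the resolution rules above: a missing or invalid provider
    falls back to openrouter, and a missing model gets the resolved
    provider's default."""
    if provider not in VALID_PROVIDERS:
        provider = DEFAULT_PROVIDER
    if model is None:
        model = DEFAULT_MODELS[provider]
    return provider, model
```

The error-fallback rule (retrying a failed non-OpenAI call with the default OpenAI model when `OPENAI_API_KEY` is set) would then wrap the actual API call around this resolution step.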

License

The CoRT MCP Server is released under the MIT License – Go wild with it!

By leveraging the UBOS CoRT MCP Server, you can unlock the full potential of AI and drive innovation across your organization.
