UBOS Asset Marketplace: Unleash the Power of MCP Servers for AI Agent Development

In the rapidly evolving landscape of Artificial Intelligence, the ability to create intelligent, context-aware AI Agents is becoming increasingly crucial. UBOS, a full-stack AI Agent Development Platform, empowers businesses to orchestrate AI Agents, connect them with enterprise data, and build custom AI Agents leveraging their own LLM models and Multi-Agent Systems. A key component in achieving this is the Model Context Protocol (MCP), an open standard that revolutionizes how applications provide context to Large Language Models (LLMs).

This document explores the UBOS Asset Marketplace’s offering of MCP Servers, focusing on a specific example: the Gemini Terminal Agent. We will delve into its features, installation process, usage, and how it leverages MCP to provide a powerful and versatile AI interaction experience. By understanding the capabilities of MCP Servers within the UBOS ecosystem, you can unlock new possibilities for AI Agent development and deployment.

Understanding MCP Servers and Their Role in AI Agent Development

The Model Context Protocol (MCP) addresses a fundamental challenge in building effective AI Agents: providing LLMs with the context they need to understand and respond appropriately to user queries. Traditional approaches often involve manually feeding information to the LLM, which is time-consuming and inefficient, and limits the agent’s ability to access real-time data.

MCP Servers act as a bridge, enabling AI models to access and interact with external data sources and tools. They provide a standardized way for applications to deliver context to LLMs, allowing agents to:

  • Access Real-Time Information: MCP Servers can connect to web search engines, databases, and other data sources, providing agents with up-to-date information to inform their responses.
  • Interact with External Tools: MCP Servers can integrate with various tools and APIs, enabling agents to perform actions such as sending emails, scheduling appointments, or controlling IoT devices.
  • Maintain Conversation History: MCP Servers can store and retrieve conversation history, allowing agents to maintain context throughout a conversation and provide more relevant and personalized responses.
  • Filter and Refine Information: MCP Servers can filter and refine information from external sources, ensuring that the LLM receives only the most relevant and accurate data.

By leveraging MCP Servers, developers can create AI Agents that are more intelligent, versatile, and capable of handling complex tasks.
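The tool-exposure pattern described above can be sketched in plain Python. This is an illustrative stand-in, not the official MCP SDK: a registry that maps tool names to functions and routes a model's tool calls to them, which is the kind of contract an MCP server standardizes. All names here (`tool`, `dispatch`, the `search` stub) are hypothetical.

```python
# Illustrative sketch of the tool-registry pattern an MCP server
# standardizes. Not the official MCP SDK; all names are hypothetical.
from typing import Callable, Dict

TOOLS: Dict[str, Callable[..., str]] = {}

def tool(name: str):
    """Register a function under a tool name the model can call."""
    def decorator(fn: Callable[..., str]) -> Callable[..., str]:
        TOOLS[name] = fn
        return fn
    return decorator

@tool("search")
def search(query: str) -> str:
    # A real MCP server would call a web search API here.
    return f"results for: {query}"

def dispatch(name: str, **kwargs) -> str:
    """Route a model's tool call to the registered implementation."""
    if name not in TOOLS:
        raise KeyError(f"unknown tool: {name}")
    return TOOLS[name](**kwargs)
```

A real MCP server adds a transport and a schema for each tool on top of this dispatch core, so any MCP-aware client can discover and invoke the tools.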

Gemini Terminal Agent: A Practical Example of MCP Server Implementation

The Gemini Terminal Agent, available on the UBOS Asset Marketplace, provides a compelling example of how MCP Servers can be used to create powerful and user-friendly AI Agents. This agent allows you to interact with Google’s Gemini models directly from your terminal, leveraging real-time web search for up-to-date information.

Use Cases:

  • Research and Information Gathering: Quickly find answers to complex questions by leveraging Gemini’s AI capabilities and real-time web search.
  • Code Generation and Debugging: Generate code snippets, debug existing code, and get assistance with programming tasks directly from your terminal.
  • Content Creation and Summarization: Generate summaries of web pages, create outlines for articles, and get assistance with various content creation tasks.
  • Personal Assistant: Use the agent to set reminders, schedule appointments, and manage your daily tasks.
  • Learning and Exploration: Explore new topics, learn about different subjects, and get personalized recommendations from Gemini.

Key Features:

  • Conversational AI Interface: Engage in natural language conversations with Google’s Gemini models directly from your terminal.
  • Web Search Integration: Access real-time information from the web to inform your queries and responses.
  • Conversation History: Maintain context throughout your conversations, allowing for more relevant and personalized interactions.
  • Advanced Search Options: Filter search results by domains, exclude specific sites, and refine your search queries for more accurate results.
  • Clean, Modular Architecture: The agent features a well-structured codebase that’s easy to extend and customize.

Installation and Setup of the Gemini Terminal Agent

To get started with the Gemini Terminal Agent, follow these steps:

  1. Prerequisites:

    • Python 3.9+
    • Google API key for Gemini models
    • Google Custom Search Engine (CSE) API key and ID
  2. Clone the Repository:

     ```bash
     git clone https://github.com/yourusername/gemini-terminal-agent.git
     cd gemini-terminal-agent
     ```

  3. Create a Virtual Environment (Recommended):

     ```bash
     python -m venv venv
     source venv/bin/activate  # On Windows: venv\Scripts\activate
     ```

  4. Install Dependencies:

     ```bash
     pip install -r requirements.txt
     ```

  5. Create a .env File: In the project root, create a .env file containing your API keys:

     ```
     GOOGLE_GENAI_API_KEY=your_gemini_api_key_here
     SEARCH_ENGINE_API_KEY=your_google_api_key_here
     SEARCH_ENGINE_CSE_ID=your_cse_id_here
     DEFAULT_MODEL=gemini-2.5-flash-preview-04-17
     ```
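For context, a .env file like this is just KEY=value lines that get loaded into the process environment at startup. The project may well use a library such as python-dotenv for this; the stdlib-only reader below is a hedged sketch of what that loading step does, with the variable names taken from the example above.

```python
# Minimal .env reader sketch (the project likely uses python-dotenv
# or similar; this only illustrates what loading a .env file does).
import os

def load_env(path: str = ".env") -> None:
    """Parse KEY=value lines into os.environ, skipping comments/blanks."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # setdefault: real environment variables win over .env values
            os.environ.setdefault(key.strip(), value.strip())
```

Because `setdefault` is used, values already present in the environment take precedence over the file, which is the conventional behavior for .env loaders.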

Setting Up Google Search Engine:

To use the web search functionality, you need to set up a Google Custom Search Engine:

  1. Get a Google API Key:
    • Go to Google Cloud Console
    • Create a new project or select an existing one
    • Navigate to “APIs & Services” > “Library”
    • Search for “Custom Search API” and enable it
    • Go to “APIs & Services” > “Credentials”
    • Create an API key and copy it (this will be your SEARCH_ENGINE_API_KEY)
  2. Create a Custom Search Engine:
    • Go to Programmable Search Engine
    • Click “Create a Programmable Search Engine”
    • Add sites to search (use *.com to search the entire web)
    • Give your search engine a name
    • In “Customize” > “Basics”, enable “Search the entire web”
    • Get your Search Engine ID from the “Setup” > “Basics” page (this will be your SEARCH_ENGINE_CSE_ID)
  3. Get a Gemini API Key:
    • Go to Google AI Studio
    • Sign in with your Google account
    • Go to “API Keys” and create a new API key
    • Copy the API key (this will be your GOOGLE_GENAI_API_KEY)
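Once the three credentials are in place, the API key and CSE ID are sent as query parameters to Google's Custom Search JSON API. The sketch below shows how such a request URL is assembled; the `key`, `cx`, `q`, and `dateRestrict` parameter names come from Google's API, while the helper itself is an assumption about how the agent's search code might look, not its actual implementation.

```python
# Sketch of how the SEARCH_ENGINE_API_KEY and SEARCH_ENGINE_CSE_ID are
# used with Google's Custom Search JSON API. The helper is illustrative;
# the parameter names (key, cx, q, dateRestrict) are Google's.
from typing import Optional
from urllib.parse import urlencode

CSE_ENDPOINT = "https://www.googleapis.com/customsearch/v1"

def build_search_url(api_key: str, cse_id: str, query: str,
                     date_restrict: Optional[str] = None) -> str:
    """Build a Custom Search API request URL for the given query."""
    params = {"key": api_key, "cx": cse_id, "q": query}
    if date_restrict:
        params["dateRestrict"] = date_restrict  # e.g. "d7" = past 7 days
    return CSE_ENDPOINT + "?" + urlencode(params)
```

Fetching this URL (with a valid key and CSE ID) returns a JSON payload whose `items` array holds the search results.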

Usage and Commands

To run the agent, execute the following command in your terminal:

```bash
python main.py
```

Once the agent is running, you can interact with it using the following commands:

  • Type your question or prompt: Interact with the agent by typing your queries in natural language.
  • help: Display available tools and commands.
  • clear: Clear the conversation history.
  • exit, quit, or q: Exit the program.
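The command handling above can be sketched as a simple dispatch ahead of the model call: built-in commands are intercepted, and everything else is treated as a prompt. This is a hypothetical sketch of that loop's core; the real main.py may be structured quite differently.

```python
# Hypothetical sketch of the terminal command dispatch described above.
# Built-in commands are handled locally; anything else goes to the model.
from typing import List

def handle_command(line: str, history: List[str]) -> str:
    cmd = line.strip().lower()
    if cmd == "help":
        return "Available: help, clear, exit/quit/q"
    if cmd == "clear":
        history.clear()
        return "History cleared."
    if cmd in ("exit", "quit", "q"):
        return "bye"
    history.append(line)          # keep context for follow-up turns
    return f"[agent reply to: {line}]"
```

Keeping the command check before the model call means `clear` and `exit` cost nothing, and the history list is what lets follow-up questions stay in context.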

Example Queries:

```
> What is the capital of France?
Paris is the capital of France. It is located in the north-central part
of the country on the Seine River.

> search for recent developments in quantum computing
Searching the web for recent developments in quantum computing…
[Agent response with up-to-date information]

> help
🔍 Available Tools:
  • search: Search for information online based on a query
  • advanced_search: Perform an advanced search with domain filtering
    and time range options

⌨️ Terminal Commands:
  • help: Show this help message
  • clear: Clear conversation history
  • exit/quit/q: Exit the program
```

Project Structure

The Gemini Terminal Agent project is organized into the following directories:

```
gemini-terminal-agent/
│
├── main.py                # Main entry point
├── search_server.py       # Search server entry point
├── .env                   # Environment variables (not versioned)
│
├── agent/                 # Agent implementation
│   ├── __init__.py
│   ├── terminal_agent.py  # Core agent implementation
│   └── config.py          # Agent configuration
│
├── search/                # Search functionality
│   ├── __init__.py
│   ├── server.py          # MCP search server
│   ├── engine.py          # Search engine implementation
│   └── content.py         # Web content extraction
│
└── utils/                 # Shared utilities
    ├── __init__.py
    ├── config.py          # Global configuration
    └── logging.py         # Logging setup
```

Advanced Configuration

You can further customize the agent’s behavior by modifying settings in your .env file. Some of the key settings include:

  • DEFAULT_MODEL: Specifies the Gemini model to use (e.g., gemini-2.5-flash-preview-04-17, gemini-1.5-pro, gemini-1.5-flash).
  • MAX_CONCURRENT_REQUESTS: Sets the maximum number of concurrent web search requests.
  • CONNECTION_TIMEOUT: Defines the connection timeout for web search requests.
  • CONTENT_TIMEOUT: Sets the content timeout for web search requests.
  • MAX_CONTENT_LENGTH: Specifies the maximum length of web content to extract.
  • CACHE_TTL: Defines the cache time-to-live for web search results.
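Settings like these are typically read from the environment with typed defaults so a missing or malformed value cannot crash startup. The helper and the default values below are illustrative assumptions, not the project's actual configuration code.

```python
# Sketch of reading the tuning knobs above from the environment.
# The helper and default values are illustrative assumptions.
import os

def int_env(name: str, default: int) -> int:
    """Read an integer setting, falling back to a default on bad input."""
    try:
        return int(os.environ.get(name, default))
    except ValueError:
        return default

MAX_CONCURRENT_REQUESTS = int_env("MAX_CONCURRENT_REQUESTS", 5)
CONNECTION_TIMEOUT = int_env("CONNECTION_TIMEOUT", 10)  # seconds
CACHE_TTL = int_env("CACHE_TTL", 300)                   # seconds
```

Centralizing the parsing this way keeps every setting overridable from the .env file while guaranteeing each one has a usable value.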

Contributing to the Project

Contributions to the Gemini Terminal Agent project are welcome! If you have any suggestions, bug fixes, or new features, please feel free to submit a Pull Request.

  1. Fork the repository.
  2. Create your feature branch (git checkout -b feature/amazing-feature).
  3. Commit your changes (git commit -m 'Add some amazing feature').
  4. Push to the branch (git push origin feature/amazing-feature).
  5. Open a Pull Request.

UBOS: Your Partner in AI Agent Development

The Gemini Terminal Agent is just one example of the many powerful tools and resources available on the UBOS Asset Marketplace. UBOS provides a comprehensive platform for developing, deploying, and managing AI Agents, empowering businesses to leverage the full potential of AI.

With UBOS, you can:

  • Orchestrate AI Agents: Easily manage and coordinate multiple AI Agents to perform complex tasks.
  • Connect Agents with Enterprise Data: Integrate AI Agents with your existing data sources to provide them with the context they need to make informed decisions.
  • Build Custom AI Agents: Create custom AI Agents tailored to your specific business needs.
  • Leverage Multi-Agent Systems: Develop sophisticated AI systems that combine the strengths of multiple agents.

By leveraging the UBOS platform and the resources available on the Asset Marketplace, you can accelerate your AI Agent development efforts and unlock new opportunities for innovation.

Conclusion

The integration of MCP Servers with AI Agents, as exemplified by the Gemini Terminal Agent, represents a significant step forward in the field of Artificial Intelligence. By providing LLMs with access to real-time information, external tools, and conversation history, MCP Servers enable the creation of more intelligent, versatile, and capable AI Agents.

The UBOS Asset Marketplace offers a wide range of MCP Servers and other AI-related resources, empowering businesses to develop and deploy cutting-edge AI solutions. Whether you are looking to automate tasks, improve customer service, or gain insights from data, UBOS provides the tools and resources you need to succeed in the age of AI.
