UBOS Asset Marketplace: Gemini API Proxy for OpenAI – Unlock the Power of Serverless AI
In the rapidly evolving landscape of Artificial Intelligence, interoperability and cost-effectiveness are paramount. UBOS introduces a game-changing asset in its marketplace: an MCP (Model Context Protocol) Server that acts as a seamless proxy, exposing Google's Gemini API behind an OpenAI-compatible interface. This innovative solution lets users pair the free (within usage limits) Gemini API with tools and platforms built exclusively for the OpenAI API, without the complexities of server management. The proxy is distributed through the UBOS Asset Marketplace, a central hub for AI-related tools and services with a particular focus on MCP Servers.
What is an MCP Server?
At its core, an MCP Server is a bridge that standardizes how applications provide context to Large Language Models (LLMs). It enables AI models to interact with external data sources and tools, enriching their understanding and capabilities. The Gemini ➜ OpenAI API proxy exemplifies this by allowing applications built for the OpenAI ecosystem to harness the power of Google’s Gemini models.
Key Benefits and Features
- Serverless Architecture: Deploy and run your proxy in the cloud without the need for server maintenance. This significantly reduces operational overhead and allows you to focus on building and deploying AI applications.
- Cost-Effectiveness: The Gemini API is free (subject to usage limits). This proxy allows you to take advantage of this free resource while still using your favorite OpenAI-compatible tools.
- Easy Deployment: Deploy with one-click integrations to Vercel, Netlify, and Cloudflare. Alternatively, use CLI tools for more advanced configuration.
- Local Development Support: Develop and test your applications locally with Node, Deno, or Bun.
- Flexible API Base: Choose between different API bases (e.g., `/v1` or `/edge/v1` for Netlify) to optimize for your specific needs.
- Model Compatibility: The proxy intelligently handles model names, defaulting to `gemini-2.0-flash` for chat completions and `text-embedding-004` for embeddings if the specified model doesn’t match the Gemini naming convention.
- Built-in Tools: Utilize built-in tools like web search by appending `:search` to the model name (e.g., `gemini-2.0-flash:search`); see the sketch after this list.
- Media Support: Supports vision and audio input as per OpenAI specifications, implemented via inline data.
- Comprehensive API Endpoint Support: Supports a wide range of OpenAI API endpoints, including `chat/completions`, `embeddings`, and `models`, with extensive parameter compatibility.
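To make the model-name conventions concrete, here is a minimal sketch of calling a deployed proxy through the official `openai` Node SDK. The base URL is the placeholder used later in this article, the key is your Gemini API key, and the `:search` suffix follows the convention described above; treat this as an illustration, not the project’s documented client code.

```javascript
import OpenAI from "openai";

// Point the standard OpenAI client at the deployed proxy instead of
// api.openai.com. The API key here is your Gemini key, not an OpenAI key.
const client = new OpenAI({
  baseURL: "https://my-super-proxy.vercel.app/v1", // placeholder deployment URL
  apiKey: process.env.GEMINI_API_KEY,
});

const completion = await client.chat.completions.create({
  // Appending ":search" enables the proxy's built-in web search tool.
  model: "gemini-2.0-flash:search",
  messages: [{ role: "user", content: "Summarize today's top AI news." }],
});

console.log(completion.choices[0].message.content);
```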
Use Cases
This Gemini ➜ OpenAI API proxy unlocks a plethora of use cases across various industries:
- AI-Powered Chatbots: Integrate Gemini’s powerful language models into your existing chatbot platforms designed for OpenAI, enhancing customer support and engagement.
- Content Creation: Utilize Gemini for generating high-quality content with tools optimized for OpenAI’s content creation APIs.
- Data Analysis and Insights: Leverage Gemini’s advanced analytical capabilities with data analysis tools built for the OpenAI ecosystem.
- Educational Applications: Develop interactive learning experiences powered by Gemini using educational platforms designed around the OpenAI API.
- Research and Development: Experiment with Gemini’s capabilities in research projects that leverage existing OpenAI-compatible research tools.
Deployment Options: A Detailed Guide
The MCP Server offers multiple deployment options to suit different preferences and levels of technical expertise.
1. Vercel:
Vercel provides a seamless deployment experience with its intuitive interface and powerful CLI tools.
- Button Deploy: Click the “Deploy with Vercel” button to initiate the deployment process. This will guide you through forking the repository, which is essential for continuous integration.
- CLI Deployment: Use the Vercel CLI (`vercel deploy`) for more control over the deployment process.
- Local Development: Serve the proxy locally using `vercel dev`.
Important Considerations: Be mindful of Vercel Functions limitations, especially when using the Edge runtime.
2. Netlify:
Netlify offers another excellent platform for serverless deployment with its drag-and-drop interface and robust CLI tools.
- Button Deploy: Click the “Deploy to Netlify” button to start the deployment process.
- CLI Deployment: Use the Netlify CLI (`netlify deploy`) for more advanced deployment options.
- Local Development: Serve the proxy locally using `netlify dev`.
API Base Options: Netlify provides two API bases: `/v1` (for Functions) and `/edge/v1` (for Edge Functions). Choose the base that matches your performance and latency requirements.
Important Considerations: Pay attention to the limitations of Netlify Functions and Edge Functions.
3. Cloudflare Workers:
Cloudflare Workers provide a highly scalable and globally distributed platform for deploying serverless applications.
- Button Deploy: Click the “Deploy to Cloudflare Workers” button to initiate the deployment process.
- Manual Deployment: Copy the content of `src/worker.mjs` to the Cloudflare Workers playground and deploy from there.
- CLI Deployment: Use the Wrangler CLI (`wrangler deploy`) for more advanced deployment options.
- Local Development: Serve the proxy locally using `wrangler dev`.
Important Considerations: Be aware of the Cloudflare Worker limits.
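For orientation, the skeleton below shows the general shape of a Workers-style proxy: an exported `fetch` handler that forwards incoming requests to an upstream host. It is a simplified, hypothetical sketch of the pattern only; the project’s actual `src/worker.mjs` additionally translates request and response bodies between the OpenAI and Gemini formats.

```javascript
// Hypothetical skeleton of a Workers-style proxy, shown only to illustrate
// the deployment shape. The real src/worker.mjs also converts OpenAI-format
// request/response bodies to and from the Gemini API format.
export default {
  async fetch(request) {
    const url = new URL(request.url);
    url.host = "generativelanguage.googleapis.com"; // Gemini API host
    return fetch(new Request(url, request));
  },
};
```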
4. Deno:
Deno is a modern runtime for JavaScript and TypeScript that offers a streamlined development experience. Refer to the project’s GitHub discussions for detailed instructions on deploying with Deno.
5. Local Server (Node, Deno, Bun):
For development purposes, you can easily run the proxy locally using Node, Deno, or Bun.
- Node: `npm install` followed by `npm run start`.
- Deno: `npm run start:deno`.
- Bun: `npm run start:bun`.
Dev Mode: For real-time updates as you modify the source code, use the dev mode:
- Node: `npm install --include=dev` followed by `npm run dev`.
- Deno: `npm run dev:deno`.
- Bun: `npm run dev:bun`.
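Once a local server is running, a plain `fetch` call makes a quick smoke test. This sketch carries assumptions: port 8000 is a placeholder (use the address the start script actually prints), and the Gemini key is sent as a standard OpenAI-style bearer token.

```javascript
// Quick smoke test against a locally running proxy. The port is a
// placeholder; use whatever address the start script prints on launch.
const base = "http://localhost:8000/v1";

const res = await fetch(`${base}/models`, {
  // Assumes the proxy accepts the Gemini key as an OpenAI-style bearer token.
  headers: { Authorization: `Bearer ${process.env.GEMINI_API_KEY}` },
});
console.log(await res.json());
```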
Integrating the Proxy into Your Workflow
After deploying the proxy, integrate it into your applications by specifying the API address and your Gemini API key in the relevant settings.
- API Base Format: The API base should typically be in the format `https://my-super-proxy.vercel.app/v1`.
- Configuration Settings: Look for fields labeled “OpenAI proxy” or similar, often found under “Advanced settings” or in configuration files.
- Environment Variables: For command-line tools, set the `OPENAI_BASE_URL` or `OPENAI_API_BASE` environment variable, as in the sketch after this list.
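As an illustration of the environment-variable route, the sketch below builds a client from `OPENAI_BASE_URL` and lists the models the proxy exposes. Which variable a given tool reads, and whether it expects the Gemini key in `OPENAI_API_KEY`, depends on that tool, so check its documentation.

```javascript
import OpenAI from "openai";

// Many OpenAI-compatible tools read their endpoint from the environment:
//   OPENAI_BASE_URL=https://my-super-proxy.vercel.app/v1
//   OPENAI_API_KEY=<your Gemini API key>
const client = new OpenAI({
  baseURL: process.env.OPENAI_BASE_URL,
  apiKey: process.env.OPENAI_API_KEY,
});

// Quick connectivity check: list the models the proxy exposes.
const models = await client.models.list();
for (const model of models.data) console.log(model.id);
```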
Supported API Endpoints and Parameters
The proxy strives to support a comprehensive set of OpenAI API endpoints and parameters. Here’s a breakdown of the current support:
- `chat/completions`: Most parameters applicable to both APIs are implemented, including `messages`, `model`, `frequency_penalty`, `max_tokens`, `n`, `presence_penalty`, `response_format`, `seed`, `stop`, `stream`, `temperature`, `top_p`, `tools`, and `tool_choice`.
- `embeddings`: Fully supported.
- `models`: Fully supported.
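For example, the `embeddings` endpoint can be exercised with the same client setup; a minimal sketch, assuming the placeholder deployment URL from earlier:

```javascript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://my-super-proxy.vercel.app/v1", // placeholder proxy URL
  apiKey: process.env.GEMINI_API_KEY,
});

// The proxy maps this request to Gemini's embedding model; per the
// compatibility rules above, non-Gemini names fall back to text-embedding-004.
const response = await client.embeddings.create({
  model: "text-embedding-004",
  input: "Model Context Protocol servers bridge LLMs and external tools.",
});

console.log(response.data[0].embedding.length); // vector dimensionality
```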
Leveraging the UBOS Platform
The UBOS platform is a full-stack AI Agent Development Platform focused on bringing AI Agents to every business department. By integrating this Gemini ➜ OpenAI API proxy with UBOS, you can:
- Orchestrate AI Agents: Seamlessly integrate Gemini-powered agents into your existing UBOS workflows.
- Connect to Enterprise Data: Enhance your AI Agents with access to your enterprise data sources, providing them with the context they need to perform effectively.
- Build Custom AI Agents: Create custom AI Agents powered by Gemini, tailored to your specific business needs.
- Develop Multi-Agent Systems: Build complex multi-agent systems that leverage the strengths of both Gemini and other AI models.
Conclusion
The Gemini ➜ OpenAI API proxy is a powerful tool that unlocks the potential of serverless AI, cost-effective access to Gemini’s capabilities, and seamless integration with existing OpenAI-compatible tools. By leveraging this asset in the UBOS marketplace, you can accelerate your AI development efforts and drive innovation across your organization. Embrace the future of AI with UBOS and the Gemini ➜ OpenAI API proxy.
OpenAI Gemini Proxy
Project Details
- Uwqn/openai-gemini
- MIT License
- Last Updated: 5/6/2025
Recommended MCP Servers
Multi-tenant service that allows MCP Clients to connect to Integration App's MCP Server
FalkorDB MCP Server
My MCP Server POC
tensorflow implementation
MCP server for Unreal Engine 5
A flexible HTTP fetching Model Context Protocol server.
A Model Context Protocol (MCP) server enabling AI assistants to interact with Outline documentation services.
council of models for decision
Java implementation of MCP Server for Craw4ai