Frequently Asked Questions (FAQ)
Q: What is the Gemini ➜ OpenAI API proxy? A: It’s a serverless proxy that allows you to use the free (within limits) Gemini API with tools and platforms designed for the OpenAI API.
Q: Why would I use this proxy? A: To leverage the Gemini API’s capabilities with tools that only support the OpenAI API, and to potentially save costs by using Gemini’s free tier.
Q: Is the Gemini API really free? A: Yes, but it has usage limits. Check the Gemini API documentation for details.
Q: What deployment options are available? A: You can deploy to Vercel, Netlify, Cloudflare Workers, Deno, or run it locally with Node, Deno, or Bun.
Q: How do I deploy to Vercel?
A: You can use the “Deploy with Vercel” button or the Vercel CLI (vercel deploy).
Q: What is an API base and how do I configure it?
A: The API base is the URL of your deployed proxy. It should be in the format https://your-proxy-url/v1. You need to configure your OpenAI-compatible tools to use this API base.
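A minimal sketch of what a request against the proxy looks like once the API base is configured. The URL and API key below are placeholders, and the helper function is hypothetical; the point is that the Gemini API key is sent where an OpenAI key normally would be:

```python
import json

# Hypothetical values: substitute your own deployment URL and Gemini API key.
API_BASE = "https://your-proxy-url/v1"
GEMINI_API_KEY = "YOUR_GEMINI_API_KEY"

def chat_request(model, messages):
    """Build an OpenAI-style chat/completions request aimed at the proxy."""
    url = f"{API_BASE}/chat/completions"
    headers = {
        # The Gemini API key goes where an OpenAI key normally would.
        "Authorization": f"Bearer {GEMINI_API_KEY}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"model": model, "messages": messages})
    return url, headers, body

url, headers, body = chat_request(
    "gemini-2.0-flash", [{"role": "user", "content": "Hello"}]
)
# Send with any HTTP client, e.g.:
# urllib.request.urlopen(urllib.request.Request(url, body.encode(), headers))
```

Most OpenAI-compatible tools only need the base URL and key set in their settings; the request shape above is what they produce under the hood.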
Q: What API endpoints are supported?
A: Currently, chat/completions, embeddings, and models are supported. chat/completions has the most comprehensive parameter support.
Q: How do I use web search with Gemini through this proxy?
A: Append :search to the model name (e.g., gemini-2.0-flash:search).
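Since the suffix lives in the model name, no extra request parameters are needed. A sketch of the resulting payload (the question text is illustrative):

```python
import json

# The proxy reads web-search activation from the model name itself,
# so the request body stays a standard chat/completions payload.
payload = {
    "model": "gemini-2.0-flash:search",  # ':search' suffix enables web search
    "messages": [{"role": "user", "content": "Summarize today's AI news."}],
}
body = json.dumps(payload)
```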
Q: What is UBOS and how does this proxy relate to it? A: UBOS is a full-stack AI Agent Development Platform. This proxy can be used to integrate Gemini into UBOS-based AI Agent workflows.
Q: Can I use vision and audio input with this proxy? A: Yes, vision and audio input are supported as per OpenAI specifications, implemented via inline data.
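A sketch of an OpenAI-spec vision message, assuming the standard `image_url` content part with a base64 data URI (the `vision_message` helper and the sample bytes are illustrative, not part of the proxy's API):

```python
import base64

def vision_message(image_bytes: bytes, mime_type: str, prompt: str) -> dict:
    """Build an OpenAI-style vision message; the image travels as a base64
    data URI, which the proxy forwards to Gemini as inline data."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url",
             "image_url": {"url": f"data:{mime_type};base64,{b64}"}},
        ],
    }

msg = vision_message(b"<raw png bytes>", "image/png", "Describe this image.")
```

The message can then be passed in the `messages` array of a chat/completions request like any text-only message.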
Q: What if I encounter a 404 Not Found error when opening my deployed site in a browser?
A: This is expected. The API is not designed for direct browser access. You should use it through an OpenAI-compatible tool or application.
OpenAI Gemini Proxy
Project Details
- Uwqn/openai-gemini
- MIT License
- Last Updated: 5/6/2025
Recommended MCP Servers
MCP Server for the Slidespeak API. Create PowerPoint Presentations using MCP.
An integration that allows Claude Desktop to interact with Spotify using the Model Context Protocol (MCP).
Bayesian MCTS Model Context Protocol Server allowing Claude to control Ollama local models for Advanced MCTS and analysis.
Model Context Protocol server for OpenStreetMap data
Liquidium MCP with Posthog Support
MCP Server to interact with Google Cloud Tasks
Model Context Protocol (MCP) Server for Graphlit Platform
A Perplexity MCP server based on https://github.com/jaacob/perplexity-mcp which includes additional tools supporting domain filtering, search recency and model...
A Claude desktop MCP server for controlling the Canvas LMS REST API