# GemForge-mcp

GemForge-mcp is a professional Gemini API integration for Claude and other MCP-compatible hosts, with intelligent model selection and advanced file handling capabilities.

## Overview

GemForge-mcp provides a Model Context Protocol (MCP) server that offers specialized tools for interacting with Google's Gemini AI models. It features intelligent model selection based on task type and content, advanced file handling, and optimized prompts for different use cases.
## Installation

```bash
# Clone the repository
git clone https://github.com/your-username/GemForge-mcp.git
cd GemForge-mcp

# Install dependencies
npm install

# Build the project
npm run build
```
## Configuration

Create a `.env` file in the root directory with the following variables:

```bash
GEMINI_API_KEY=your_gemini_api_key_here
DEFAULT_MODEL_ID=gemini-2.5-flash-preview-04-17  # Optional
GEMINI_PAID_TIER=false  # Set to 'true' if using the paid tier
```
## Running the Server

```bash
# Run from source (development)
npm run start

# Run from compiled JavaScript (production)
npm run start:dist
```
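To call the server from an MCP host such as Claude Desktop, register it in the host's configuration. The snippet below is a sketch for Claude Desktop's `claude_desktop_config.json`; the server name and the absolute path to `dist/index.js` are placeholders to adapt to your setup.

```json
{
  "mcpServers": {
    "gemforge": {
      "command": "node",
      "args": ["/absolute/path/to/GemForge-mcp/dist/index.js"],
      "env": {
        "GEMINI_API_KEY": "your_gemini_api_key_here"
      }
    }
  }
}
```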
## Smithery.ai Deployment

This repository includes the necessary configuration for deploying the GemForge MCP server on smithery.ai.

### Smithery Configuration

The `smithery.yaml` file contains the configuration needed for Smithery deployment:
```yaml
# Smithery.ai configuration
startCommand:
  type: stdio
  configSchema:
    type: object
    properties:
      GEMINI_API_KEY:
        type: string
        description: "Google Gemini API key"
      GEMINI_PAID_TIER:
        type: boolean
        description: "Whether using paid tier (for rate limits)"
        default: false
      DEFAULT_MODEL_ID:
        type: string
        description: "Default Gemini model ID to use"
        default: "gemini-2.5-flash-preview-04-17"
    required:
      - GEMINI_API_KEY
  # Command function that generates the startup command
  commandFunction: |-
    (config) => ({
      "command": "node",
      "args": ["dist/index.js"],
      "env": {
        "GEMINI_API_KEY": config.GEMINI_API_KEY,
        "GEMINI_PAID_TIER": config.GEMINI_PAID_TIER ? "true" : "false",
        "DEFAULT_MODEL_ID": config.DEFAULT_MODEL_ID || "gemini-2.5-flash-preview-04-17"
      }
    })

# Docker configuration
docker:
  image: gemforge-mcp:latest
  env:
    # Environment variables configured through smithery UI
```
### Deployment Steps

1. **Prepare Your Repository:**
   - Ensure your code is committed and pushed to GitHub
   - Verify the `smithery.yaml` file is properly configured
2. **Sign Up for Smithery:**
   - Create an account at smithery.ai
   - Connect your GitHub account to smithery.ai
3. **Create a New Deployment:**
   - Select "New Tool" or the equivalent option
   - Choose this repository from your GitHub repositories
   - Select the branch you want to deploy (usually `main` or `master`)
4. **Configure Environment Variables:**
   - Enter your `GEMINI_API_KEY` in the smithery.ai dashboard
   - Optionally configure `GEMINI_PAID_TIER` and `DEFAULT_MODEL_ID`
5. **Deploy:**
   - Initiate the deployment process
   - Smithery will build and deploy your MCP server
6. **Integration:**
   - Once deployed, smithery will provide integration instructions
   - Follow those instructions to connect the MCP server to your AI assistant

### Updates and Maintenance

- Push changes to your GitHub repository
- Smithery can be configured to automatically rebuild and deploy on changes
- Monitor your deployment through the smithery.ai dashboard
## Docker Deployment

### Prerequisites

- Docker installed on your system
- Docker Compose (optional, for easier management)
- A Google Gemini API key

### Building the Docker Image

```bash
# Using Docker directly
docker build -t gemforge-mcp .

# Using Docker Compose
docker-compose build
```

### Running the Container

```bash
# Using Docker directly
docker run -e GEMINI_API_KEY=your_api_key -e GEMINI_PAID_TIER=false -e DEFAULT_MODEL_ID=gemini-2.5-flash-preview-04-17 gemforge-mcp

# Using Docker Compose (after setting variables in the .env file)
docker-compose up -d
```
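For reference, a minimal `docker-compose.yml` along these lines would support the commands above. The service name and layout are assumptions; if the repository ships its own compose file, defer to that.

```yaml
services:
  gemforge-mcp:
    build: .
    image: gemforge-mcp:latest
    environment:
      - GEMINI_API_KEY=${GEMINI_API_KEY}
      - GEMINI_PAID_TIER=${GEMINI_PAID_TIER:-false}
      - DEFAULT_MODEL_ID=${DEFAULT_MODEL_ID:-gemini-2.5-flash-preview-04-17}
    restart: unless-stopped
```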
### Docker Image Structure

The Dockerfile uses a multi-stage build process:

**Builder stage:**
- Uses Node.js Alpine as the base image
- Installs all dependencies, including dev dependencies
- Builds the TypeScript code to JavaScript

**Production stage:**
- Uses a clean Node.js Alpine image
- Creates a non-root user for improved security
- Copies only the production dependencies and built code
- Includes a health check for container monitoring
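The stages above could be sketched roughly as follows. This is an illustrative reconstruction, not the repository's actual Dockerfile; the Node version, user name, and health-check command are assumptions.

```dockerfile
# Builder stage: install all deps (including dev) and compile TypeScript
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Production stage: clean image, non-root user, prod deps and built code only
FROM node:20-alpine
WORKDIR /app
RUN addgroup -S app && adduser -S app -G app
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=builder /app/dist ./dist
USER app
# Trivial liveness probe for container monitoring
HEALTHCHECK CMD node -e "process.exit(0)"
CMD ["node", "dist/index.js"]
```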
### Environment Variables

The Docker container requires the following environment variables:

- `GEMINI_API_KEY` (required): Your Google Gemini API key
- `GEMINI_PAID_TIER` (optional): Set to `true` if using the paid tier (default: `false`)
- `DEFAULT_MODEL_ID` (optional): Default Gemini model ID (default: `gemini-2.5-flash-preview-04-17`)

These can be set in the `.env` file when using Docker Compose.
## Available Tools

GemForge-mcp provides four specialized tools for different AI tasks:

### 1. gemini_search

Generates responses based on the latest information using Gemini models with Google Search integration.

**Input Parameters:**

- `query` (string, required): Your search query or question
- `file_path` (string, optional): File path to include with the query
- `model_id` (string, optional): Model ID override
- `enable_thinking` (boolean, optional): Enable thinking mode for step-by-step reasoning

**Example:**

```json
{
  "toolName": "gemini_search",
  "toolParams": {
    "query": "What are the latest developments in quantum computing?",
    "enable_thinking": true
  }
}
```
### 2. gemini_reason

Solves complex problems with step-by-step reasoning using advanced Gemini models.

**Input Parameters:**

- `problem` (string, required): The complex problem or question to solve
- `file_path` (string, optional): File path to include with the problem
- `show_steps` (boolean, optional, default: `false`): Whether to show detailed reasoning steps
- `model_id` (string, optional): Model ID override

**Example:**

```json
{
  "toolName": "gemini_reason",
  "toolParams": {
    "problem": "If a rectangle has a perimeter of 30 units and its length is twice its width, what are the dimensions of the rectangle?",
    "show_steps": true
  }
}
```
### 3. gemini_code

Analyzes codebases using Repomix and Gemini models to answer questions about code structure, logic, and potential improvements.

**Input Parameters:**

- `question` (string, required): Question about the codebase
- `directory_path` (string, optional): Path to the code directory
- `codebase_path` (string, optional): Path to a pre-packed Repomix file
- `repomix_options` (string, optional): Custom options for the Repomix command (for power users)
- `model_id` (string, optional): Model ID override

**Example:**

```json
{
  "toolName": "gemini_code",
  "toolParams": {
    "question": "What does this project do?",
    "codebase_path": "path/to/codebase.xml"
  }
}
```

**Example with custom Repomix options** (note that the inner quotes must be escaped to keep the JSON valid):

```json
{
  "toolName": "gemini_code",
  "toolParams": {
    "question": "Analyze the log files in this directory",
    "directory_path": "path/to/logs",
    "repomix_options": "--include \"**/*.log\" --no-gitignore --no-default-patterns"
  }
}
```
### 4. gemini_fileops

Performs efficient operations on files (text, PDF, images, etc.) using appropriate Gemini models.

**Input Parameters:**

- `file_path` (string or array of strings, required): Path to the file(s)
- `instruction` (string, optional): Specific instruction for processing
- `operation` (string, optional): Specific operation type (`summarize`, `extract`, `analyze`)
- `use_large_context_model` (boolean, optional, default: `false`): Set to `true` for very large files
- `model_id` (string, optional): Model ID override

**Single File Example:**

```json
{
  "toolName": "gemini_fileops",
  "toolParams": {
    "file_path": "path/to/document.pdf",
    "operation": "summarize"
  }
}
```

**Multiple Files Example:**

```json
{
  "toolName": "gemini_fileops",
  "toolParams": {
    "file_path": ["path/to/image1.jpg", "path/to/image2.jpg"],
    "operation": "analyze",
    "instruction": "Compare these images and describe the differences"
  }
}
```
**Important Notes for Multi-File Operations:**

- **Path Format:** When passing multiple files as an array, use forward slashes (`/`) in the file paths, even on Windows systems: `"file_path": ["C:/Users/Username/Documents/file1.txt", "C:/Users/Username/Documents/file2.txt"]`
- **File Type Consistency:** For best results, use files of the same type in multi-file operations (e.g., all images, all text files).
- **Custom Instructions:** When analyzing multiple files, provide a specific `instruction` parameter to guide the comparison or analysis.
- **File Limit:** There is a practical limit to how many files can be processed at once, depending on their size and complexity. For large files, consider processing them individually or using `use_large_context_model: true`.
- **Concatenation:** When multiple text files are provided, they are concatenated with clear separators before processing.
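The concatenation behavior can be pictured with a small sketch. The helper name and separator format below are hypothetical, chosen only to illustrate the idea; the server's actual separators may differ.

```typescript
// Hypothetical illustration: joining multiple text files with clear
// per-file separators before sending them to the model in one request.
function concatenateWithSeparators(
  files: { path: string; content: string }[]
): string {
  return files
    .map((f) => `--- FILE: ${f.path} ---\n${f.content}`)
    .join("\n\n");
}

const combined = concatenateWithSeparators([
  { path: "notes/a.txt", content: "First file." },
  { path: "notes/b.txt", content: "Second file." },
]);
console.log(combined);
```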
## Model Selection

GemForge-mcp implements intelligent model selection based on:

**Task Type:**
- Search tasks: Prefers models with search capabilities
- Reasoning tasks: Prefers models with strong reasoning abilities
- Code analysis: Prefers models with code understanding
- File operations: Selects based on file type and size

**Available Models:**
- `FAST`: `gemini-2.0-flash-lite-001` - Fast, efficient model for simple tasks
- `BALANCED`: `gemini-2.0-flash-001` - Balanced model for general-purpose use
- `ADVANCED`: `gemini-2.5-pro-exp-03-25` - Advanced model for complex reasoning
- `LARGE_CONTEXT`: `gemini-1.5-pro-002` - Model for very large context windows
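In TypeScript, the selection policy described above might look roughly like this. The function and its exact priorities are illustrative assumptions, not the server's actual implementation; only the model tier names and IDs come from the list above.

```typescript
// Illustrative task-based model selection (not the real GemForge logic).
type TaskType = "search" | "reason" | "code" | "fileops";

const MODELS = {
  FAST: "gemini-2.0-flash-lite-001",
  BALANCED: "gemini-2.0-flash-001",
  ADVANCED: "gemini-2.5-pro-exp-03-25",
  LARGE_CONTEXT: "gemini-1.5-pro-002",
} as const;

function selectModel(task: TaskType, largeContext = false): string {
  // Very large inputs override everything else.
  if (largeContext) return MODELS.LARGE_CONTEXT;
  switch (task) {
    case "reason":
    case "code":
      return MODELS.ADVANCED; // strong reasoning / code understanding
    case "search":
      return MODELS.BALANCED; // general-purpose model with search grounding
    default:
      return MODELS.FAST; // simple file operations
  }
}
```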
## Special Features

- **System Instruction Hoisting:** Properly handles system instructions for all Gemini models
- **XML Content Processing:** Efficiently processes XML content for code analysis
- **File Type Detection:** Automatically detects file types and selects appropriate models
- **Rate Limit Handling:** Implements exponential backoff and model fallbacks
- **Error Recovery:** Provides meaningful error messages and recovery options
- **Custom Repomix Options:** Allows power users to customize the Repomix command for code analysis, enabling fine-grained control over which files are included or excluded
- **Multi-File Processing:** Supports analyzing multiple files in a single operation, enabling comparison and transformation analysis
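The exponential backoff mentioned above can be sketched as follows. This is a generic retry helper under assumed parameters (retry count, base delay, jitter), not the server's actual policy.

```typescript
// Generic exponential backoff with jitter: retry a failing async call
// with delays of roughly base * 2^attempt before giving up.
async function withBackoff<T>(
  fn: () => Promise<T>,
  maxRetries = 5,
  baseDelayMs = 1000
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxRetries) throw err; // retries exhausted
      const delay = baseDelayMs * 2 ** attempt + Math.random() * 250;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

A model fallback could be layered on top by catching the final error and retrying with a cheaper model ID.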
## Advanced Usage

### Multi-File Analysis with gemini_fileops

The `gemini_fileops` tool supports analyzing multiple files in a single operation, which is particularly useful for:

- **Comparison Analysis:** Compare multiple versions of a document or image
- **Transformation Analysis:** Analyze changes or progression across a series of files
- **Batch Processing:** Process multiple related files with a single instruction
**Example: Fitness Transformation Analysis**

```json
{
  "toolName": "gemini_fileops",
  "toolParams": {
    "file_path": [
      "C:/Users/Username/Images/fitness2020.jpg",
      "C:/Users/Username/Images/fitness2021.jpg",
      "C:/Users/Username/Images/fitness2022.jpg"
    ],
    "operation": "analyze",
    "instruction": "Analyze these fitness images and provide a detailed fitness transformation analysis. Compare the physique changes across the images, noting any improvements in muscle definition, body composition, and overall fitness level."
  }
}
```

**Example: Document Comparison**

```json
{
  "toolName": "gemini_fileops",
  "toolParams": {
    "file_path": [
      "C:/Users/Username/Documents/contract_v1.pdf",
      "C:/Users/Username/Documents/contract_v2.pdf"
    ],
    "operation": "extract",
    "instruction": "Compare these two contract versions and extract all significant changes between them. Highlight additions, deletions, and modifications."
  }
}
```

**Example: Code Evolution Analysis**

```json
{
  "toolName": "gemini_fileops",
  "toolParams": {
    "file_path": [
      "C:/Users/Username/Projects/v1/main.js",
      "C:/Users/Username/Projects/v2/main.js"
    ],
    "operation": "analyze",
    "instruction": "Analyze how this code has evolved between versions. Identify improvements, new features, bug fixes, and any potential issues introduced."
  }
}
```
## Development

### Project Structure

```
GemForge-mcp/
├── src/
│   ├── config/       # Configuration constants
│   ├── handlers/     # Tool handlers
│   ├── interfaces/   # TypeScript interfaces
│   ├── utils/        # Utility functions
│   └── index.ts      # Main entry point
├── test/
│   ├── fixtures/     # Test fixtures
│   └── test-*.ts     # Test files
├── dist/             # Compiled JavaScript files
├── .env              # Environment variables
├── package.json      # Project metadata
└── tsconfig.json     # TypeScript configuration
```

### Build Scripts

```bash
# Build the project
npm run build

# Run in development mode
npm run dev

# Run tests
npm run test
```
## Troubleshooting

### Common Issues

**Module Not Found Errors:**
- Ensure you've built the project with `npm run build`
- Check that the path to the module is correct

**API Key Errors:**
- Verify your Gemini API key is correctly set in the `.env` file
- Check that the API key has the necessary permissions

**Rate Limiting:**
- The server implements exponential backoff for rate limiting
- Consider setting `GEMINI_PAID_TIER=true` if you're on a paid tier

**File Processing Issues:**
- Ensure file paths are correct and accessible
- Check file permissions
- For large files, use `use_large_context_model: true`
- For multi-file operations, use forward slashes (`/`) in file paths, even on Windows
- When passing an array of files, ensure the array syntax is correct: `["path/to/file1.txt", "path/to/file2.txt"]`
- If files aren't being loaded properly, try using absolute paths instead of relative paths

**Repomix File Inclusion Issues:**
- By default, Repomix excludes certain file types (logs, binaries, etc.)
- Use the `repomix_options` parameter to customize file inclusion/exclusion
- For log files, try `repomix_options: "--include \"**/*.log\" --no-gitignore --no-default-patterns"`
- For binary files, try `repomix_options: "--include-binary"`
## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## License

This project is licensed under the MIT License - see the LICENSE file for details.

## Acknowledgments

- Google Gemini API for providing the underlying AI capabilities
- Model Context Protocol (MCP) for standardizing AI tool interfaces
## Project Details

- Repository: PV-Bhat/GemForge-MCP
- License: MIT
- Last Updated: 4/29/2025