@pinkpixel/mindbridge v1.2.0
Mindbridge MCP Server ✨
A Model Context Protocol server that bridges the gap between your applications and various LLM providers. Mindbridge (formerly second-opinion-mcp) provides a unified interface for interacting with multiple AI models, with special support for reasoning capabilities.
Features 🌟
- Multiple LLM Providers: Support for OpenAI, Anthropic, DeepSeek, Google AI, OpenRouter, and local Ollama models
- OpenAI-Compatible Endpoints: Support for third-party services that use the OpenAI API format (Azure OpenAI, Together.ai, Anyscale, Groq, etc.)
- Specialized Reasoning: Access to models specifically optimized for reasoning tasks (OpenAI o-series, Claude 3.7 Sonnet, DeepSeek Reasoner)
- Note on Google Models: Google Gemini models support reasoning internally but don't return the thinking process in their API responses. Gemini 2.5 models are currently not fully supported through the REST API and require the @google/genai library for best results
- Flexible Configuration: Easy setup with environment variables or MCP configuration
- Provider Management: Automatic provider detection and configuration
- Comprehensive Model Support: Wide range of models from each provider
- Error Handling: Robust error handling and response standardization
Installation 🛠️
Option 1: Install from npm (Recommended)
# Install globally
npm install -g @pinkpixel/mindbridge
# Or use with npx
npx @pinkpixel/mindbridge
Option 2: Install from source
Clone the repository:
git clone https://github.com/pinkpixel-dev/mindbridge.git
cd mindbridge
Install dependencies:
chmod +x install.sh
./install.sh
Configure environment variables:
cp .env.example .env
Edit .env and add your API keys for the providers you want to use.
Configuration ⚙️
Environment Variables
The server supports the following environment variables:
- OPENAI_API_KEY: Your OpenAI API key
- ANTHROPIC_API_KEY: Your Anthropic API key
- DEEPSEEK_API_KEY: Your DeepSeek API key
- GOOGLE_API_KEY: Your Google AI API key
- OPENROUTER_API_KEY: Your OpenRouter API key
- OLLAMA_BASE_URL: Ollama instance URL (default: http://localhost:11434)
- OPENAI_COMPATIBLE_API_KEY: (Optional) API key for OpenAI-compatible services
- OPENAI_COMPATIBLE_API_BASE_URL: Base URL for OpenAI-compatible services
- OPENAI_COMPATIBLE_API_MODELS: Comma-separated list of available models
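As an illustration, a .env that enables OpenAI, a local Ollama instance, and one OpenAI-compatible service might look like the following. All key and model values here are placeholders; substitute your own:

```
# Set only the providers you plan to use; values below are placeholders
OPENAI_API_KEY=YOUR_OPENAI_KEY
ANTHROPIC_API_KEY=YOUR_ANTHROPIC_KEY
OLLAMA_BASE_URL=http://localhost:11434

# Optional: an OpenAI-compatible service (e.g. Groq or Together.ai)
OPENAI_COMPATIBLE_API_KEY=YOUR_SERVICE_KEY
OPENAI_COMPATIBLE_API_BASE_URL=FULL_API_URL_HERE
OPENAI_COMPATIBLE_API_MODELS=model-a,model-b
```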
MCP Configuration
For use with MCP-compatible IDEs like Cursor or Windsurf, you can use the following configuration in your mcp.json file:
{
"mcpServers": {
"mindbridge": {
"command": "npx",
"args": [
"-y",
"@pinkpixel/mindbridge@latest"
],
"env": {
"OPENAI_API_KEY": "OPENAI_API_KEY_HERE",
"ANTHROPIC_API_KEY": "ANTHROPIC_API_KEY_HERE",
"GOOGLE_API_KEY": "GOOGLE_API_KEY_HERE",
"DEEPSEEK_API_KEY": "DEEPSEEK_API_KEY_HERE",
"OPENROUTER_API_KEY": "OPENROUTER_API_KEY_HERE"
},
"provider_config": {
"openai": {
"default_model": "gpt-4o"
},
"anthropic": {
"default_model": "claude-3-5-sonnet-20241022"
},
"google": {
"default_model": "gemini-2.0-flash"
},
"deepseek": {
"default_model": "deepseek-chat"
},
"openrouter": {
"default_model": "openai/gpt-4o"
},
"ollama": {
"base_url": "http://localhost:11434",
"default_model": "llama3"
},
"openai_compatible": {
"api_key": "API_KEY_HERE_OR_REMOVE_IF_NOT_NEEDED",
"base_url": "FULL_API_URL_HERE",
"available_models": ["MODEL1", "MODEL2"],
"default_model": "MODEL1"
}
},
"default_params": {
"temperature": 0.7,
"reasoning_effort": "medium"
},
"alwaysAllow": [
"getSecondOpinion",
"listProviders",
"listReasoningModels"
]
}
}
}
Replace the API keys with your actual keys. For the OpenAI-compatible configuration, you can remove the api_key field if the service doesn't require authentication.
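You only need to supply keys for the providers you actually use. For example, a minimal sketch of the same configuration with just OpenAI enabled (omitting the optional provider_config and default_params blocks) could look like this:

```json
{
  "mcpServers": {
    "mindbridge": {
      "command": "npx",
      "args": ["-y", "@pinkpixel/mindbridge@latest"],
      "env": {
        "OPENAI_API_KEY": "OPENAI_API_KEY_HERE"
      }
    }
  }
}
```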
Usage 💫
Starting the Server
Development mode with auto-reload:
npm run dev
Production mode:
npm run build
npm start
When installed globally:
mindbridge
Available Tools
getSecondOpinion
{
  provider: string;        // LLM provider name
  model: string;           // Model identifier
  prompt: string;          // Your question or prompt
  systemPrompt?: string;   // Optional system instructions
  temperature?: number;    // Response randomness (0-1)
  maxTokens?: number;      // Maximum response length
  reasoning_effort?: 'low' | 'medium' | 'high';  // For reasoning models
}
See the client sketch after this tool list for how these parameters are passed in a tool call.
listProviders
- Lists all configured providers and their available models
- No parameters required
listReasoningModels
- Lists models optimized for reasoning tasks
- No parameters required
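As a rough sketch, here is how these tools could be invoked from the official @modelcontextprotocol/sdk TypeScript client. The Client and StdioClientTransport usage reflects that SDK and may differ slightly between SDK versions; the tool names and arguments come from the schemas above:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the Mindbridge server over stdio, passing provider keys via env.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "@pinkpixel/mindbridge@latest"],
  env: { OPENAI_API_KEY: process.env.OPENAI_API_KEY ?? "" },
});

const client = new Client({ name: "mindbridge-demo", version: "1.0.0" });
await client.connect(transport);

// Discover which providers are configured before asking for an opinion.
const providers = await client.callTool({ name: "listProviders", arguments: {} });
console.log(providers);

// Call getSecondOpinion with the parameters shown in the schema above.
const opinion = await client.callTool({
  name: "getSecondOpinion",
  arguments: {
    provider: "openai",
    model: "gpt-4o",
    prompt: "What are the key considerations for database sharding?",
    temperature: 0.7,
    maxTokens: 1000,
  },
});
console.log(opinion);

await client.close();
```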
Example Usage 📝
// Get an opinion from GPT-4o
{
"provider": "openai",
"model": "gpt-4o",
"prompt": "What are the key considerations for database sharding?",
"temperature": 0.7,
"maxTokens": 1000
}
// Get a reasoned response from OpenAI's o1 model
{
"provider": "openai",
"model": "o1",
"prompt": "Explain the mathematical principles behind database indexing",
"reasoning_effort": "high",
"maxTokens": 4000
}
// Note: Google Gemini models support reasoning internally but don't return the thinking process
// in their API responses. For best results with reasoning, use OpenAI or DeepSeek models.
// Get a reasoned response from DeepSeek
{
"provider": "deepseek",
"model": "deepseek-reasoner",
"prompt": "What are the tradeoffs between microservices and monoliths?",
"reasoning_effort": "high",
"maxTokens": 2000
}
// Use an OpenAI-compatible provider
{
"provider": "openaiCompatible",
"model": "YOUR_MODEL_NAME",
"prompt": "Explain the concept of eventual consistency in distributed systems",
"temperature": 0.5,
"maxTokens": 1500
}
Development 🔧
- npm run lint: Run ESLint
- npm run format: Format code with Prettier
- npm run clean: Clean build artifacts
- npm run build: Build the project
Contributing 🤝
Contributions are welcome! Please check out our Contributing Guidelines for details.
License 📄
This project is licensed under the MIT License - see the LICENSE file for details.
Made with ❤️ by Pink Pixel