selectmodel v1.0.11
Select Model
Select Model is a simple package that evaluates user prompts and outputs a ranked list of suitable LLMs for the task. By automatically choosing the best model, it removes a friction point for technical and non-technical users alike.
Features
- 🎯 200-point allocation system for task analysis
- 🤖 Intelligent model recommendations
- 📊 Detailed scoring across multiple dimensions
- ⚡ Fast and efficient analysis
- 🔍 Structured JSON output
- 🛠️ Support for Gemini (default), Groq, and OpenAI APIs
Core Analysis System
The engine uses a sophisticated 200-point allocation system to analyze tasks and recommend the most suitable AI models. For each input, it:
Distributes exactly 200 points across four key metrics:
- Critical Thought (0-200 points): Higher values for complex, multi-step problems
- Coding/Math Ability (0-200 points): Points allocated based on technical complexity required
- Speed of Response (0-200 points): Prioritized for simpler queries requiring quick turnaround
- Writing/Personality (0-200 points): Points for tasks requiring nuanced communication
Determines if external knowledge is needed:
- Requires Search: true/false, indicating whether the task needs external knowledge
Provides ranked model recommendations based on:
- Match between task requirements and model capabilities
- Model strengths and specializations
- Search capability requirements
- Model stability status (production-ready vs experimental)
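To make the matching concrete, here is a hypothetical sketch of how a ranking could be derived from the 200-point allocation: each model gets a capability weight per metric, and the score is the normalized dot product of the task vector and the model profile. The model names, profiles, and scoring formula below are illustrative assumptions, not the package's actual data or algorithm.

```typescript
interface TaskAllocation {
  critical_thought: number;
  coding_or_math_abillity: number;
  speed_of_response: number;
  writing_taste_and_personality_ability: number;
}

interface ModelProfile {
  name: string;
  // Hypothetical capability weights in [0, 1], one per metric, in the same order.
  capabilities: [number, number, number, number];
}

function rankModels(task: TaskAllocation, models: ModelProfile[]) {
  const vec = [
    task.critical_thought,
    task.coding_or_math_abillity,
    task.speed_of_response,
    task.writing_taste_and_personality_ability,
  ];
  return models
    .map((m) => ({
      name: m.name,
      // Dot product, normalized by the 200-point budget so scores land near [0, 1].
      score: vec.reduce((sum, v, i) => sum + v * m.capabilities[i], 0) / 200,
    }))
    .sort((a, b) => b.score - a.score);
}

const ranked = rankModels(
  {
    critical_thought: 120,
    coding_or_math_abillity: 40,
    speed_of_response: 20,
    writing_taste_and_personality_ability: 20,
  },
  [
    { name: "reasoning-heavy-model", capabilities: [0.9, 0.8, 0.3, 0.6] },
    { name: "fast-small-model", capabilities: [0.4, 0.4, 0.9, 0.5] },
  ],
);
console.log(ranked[0].name); // "reasoning-heavy-model"
```

A reasoning-heavy allocation naturally favors the model with high critical-thought and coding weights, which is the behavior the package's recommendations aim for.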
Installation
npm install selectmodel
Prerequisites
- Node.js >= 18.0.0
- If using only one provider, you only need to supply the API key for that provider:
  - For Gemini (the default in auto mode), set GEMINI_API_KEY.
  - For Groq, set GROQ_API_KEY.
  - For OpenAI, set OPENAI_API_KEY.
- When using the default auto mode (by not specifying a provider), the package will prioritize Gemini if a GEMINI_API_KEY is provided; otherwise, it will fall back to Groq or OpenAI depending on which API keys are available.
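The auto-mode fallback order described above can be sketched as a small helper. This is an illustration of the documented behavior, not the package's actual implementation:

```typescript
// Illustrative sketch of the auto-mode provider fallback order:
// Gemini is prioritized, then Groq, then OpenAI.
function pickProvider(
  env: Record<string, string | undefined>,
): "gemini" | "groq" | "openai" {
  if (env.GEMINI_API_KEY) return "gemini"; // prioritized in auto mode
  if (env.GROQ_API_KEY) return "groq";
  if (env.OPENAI_API_KEY) return "openai";
  throw new Error("At least one API key must be provided");
}

console.log(pickProvider({ GROQ_API_KEY: "gsk_example" })); // "groq"
```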
Quick Start
import { SelectModel } from 'selectmodel';
// Initialize SelectModel with default settings
const selectModel = new SelectModel();
// Get model recommendations for a task
const analysis = await selectModel.analyzeTask(
'Build a real-time chat application with WebSocket support and user authentication'
);
// Response includes both raw analysis and model recommendations
console.log(analysis);
// {
// "raw": {
// "critical_thought": 120,
// "coding_or_math_abillity": 40,
// "speed_of_response": 20,
// "writing_taste_and_personality_ability": 20,
// "requires_search": true
// },
// "recommendedModels": [
// {
// "name": "gpt-4-turbo",
// "provider": "openai",
// "score": 0.92,
// "strengths": ["Critical Thinking", "Coding", "Writing", "Search Capable"]
// },
// // ... more model recommendations
// ],
// "requiresSearch": true
// }
// Or get just the raw analysis
const rawAnalysis = await selectModel.analyzeTask(
'Build a real-time chat application with WebSocket support and user authentication',
{ returnRawAnalysis: true }
);
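Since the recommendations come back scored, a common pattern is to take the top-scoring entry, optionally filtering by a required strength, and route the prompt to that model. The helper below is a hypothetical consumer-side sketch written against the response shape shown above; it is not part of the package's API.

```typescript
interface Recommendation {
  name: string;
  provider: string;
  score: number;
  strengths: string[];
}

// Hypothetical helper: pick the highest-scoring model, optionally
// requiring a specific strength (e.g. "Coding").
function pickModel(
  recs: Recommendation[],
  requiredStrength?: string,
): Recommendation | undefined {
  const pool = requiredStrength
    ? recs.filter((r) => r.strengths.includes(requiredStrength))
    : recs;
  return pool.slice().sort((a, b) => b.score - a.score)[0];
}

const recs: Recommendation[] = [
  { name: "gpt-4-turbo", provider: "openai", score: 0.92, strengths: ["Coding", "Search Capable"] },
  { name: "fast-model", provider: "groq", score: 0.81, strengths: ["Speed"] },
];
console.log(pickModel(recs, "Coding")?.name); // "gpt-4-turbo"
```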
API Reference
SelectModel
The main class for task analysis and model recommendations.
Constructor Options
interface SelectModelConfig {
// The provider to use for analysis. If omitted or set to 'auto', the package will automatically select the provider based on available API keys, defaulting to Gemini if available.
preferredProvider?: 'groq' | 'gemini' | 'openai' | 'auto'; // Defaults to 'auto'
returnRawAnalysis?: boolean; // Defaults to false
requireStable?: boolean; // When true, only recommends stable (production-ready) models
requireImageSupport?: boolean;
openaiModel?: string; // Optionally specify the OpenAI model to use; defaults to 'gpt-4o-2024-11-20'
}
Configuration options:
- preferredProvider: Determines which API to use for analysis
  - 'auto' (default): Uses Gemini if available, falls back to Groq or OpenAI
  - 'groq': Uses Groq's llama3-70b-8192 model exclusively
  - 'gemini': Uses Gemini's gemini-2.0-flash model exclusively
  - 'openai': Uses OpenAI's model exclusively
- returnRawAnalysis: When true, returns only the point allocations without model recommendations
- requireStable: When true, only includes stable (production-ready) models in recommendations
- requireImageSupport: When true, only includes models that support image inputs
- openaiModel: Optionally specify the OpenAI model to use; defaults to 'gpt-4o-2024-11-20'
Note: You do not need to pass the provider parameter if you want auto selection. In auto mode, only the API key for the desired provider is needed (e.g., if you want to use OpenAI, you only need to set OPENAI_API_KEY).
Methods
analyzeTask(input: string, options?: { returnRawAnalysis?: boolean }): Promise<EnhancedAnalysisResponse | AnalysisResponse>
Analyzes a task and returns either enhanced analysis with model recommendations (default) or raw point allocations.
Returns either:
interface EnhancedAnalysisResponse {
raw: AnalysisResponse;
recommendedModels: Array<{
name: string;
provider: string;
score: number;
strengths: string[];
stable: boolean; // Indicates if the model is production-ready
}>;
requiresSearch: boolean;
}
Or, if returnRawAnalysis is true:
interface AnalysisResponse {
critical_thought: number;
coding_or_math_abillity: number;
speed_of_response: number;
writing_taste_and_personality_ability: number;
requires_search: boolean;
}
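Because the four point fields are documented to sum to exactly 200, a consumer can sanity-check a raw response before acting on it. The guard below is a hypothetical helper, not part of the package:

```typescript
interface AnalysisResponse {
  critical_thought: number;
  coding_or_math_abillity: number;
  speed_of_response: number;
  writing_taste_and_personality_ability: number;
  requires_search: boolean;
}

// Hypothetical guard: verify the allocation uses the full 200-point budget.
function isValidAllocation(a: AnalysisResponse): boolean {
  const total =
    a.critical_thought +
    a.coding_or_math_abillity +
    a.speed_of_response +
    a.writing_taste_and_personality_ability;
  return total === 200;
}

const sample: AnalysisResponse = {
  critical_thought: 120,
  coding_or_math_abillity: 40,
  speed_of_response: 20,
  writing_taste_and_personality_ability: 20,
  requires_search: true,
};
console.log(isValidAllocation(sample)); // true
```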
Environment Variables
- GEMINI_API_KEY: Your Gemini API key (required if using Gemini, the default provider)
- GROQ_API_KEY: Your Groq API key (required if using Groq)
- OPENAI_API_KEY: Your OpenAI API key (required if using OpenAI)
At least one API key must be provided. If using a specific provider, its corresponding API key must be set.
Development
# Install dependencies
npm install
# Build the project
npm run build
# Run tests
npm test
# Run linting
npm run lint
# Format code
npm run format
License
MIT
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.