ai-switcher v0.1.0 • Published 5 months ago
AI Client Library
A unified TypeScript/JavaScript client library for seamlessly switching between AI providers. Configure once, switch providers anytime without changing code.
Features
- Easy provider switching via configuration
- Unified interface across providers
- Provider-specific parameter handling
- Configurable defaults
- Type-safe API
- Promise-based async/await interface
Installation
```bash
npm install ai-switcher
```
Supported Providers & Models
OpenAI
- Models:
- gpt-4
- gpt-4-turbo-preview
- gpt-3.5-turbo
- gpt-3.5-turbo-0125
- Features:
- Chat completions
- Embeddings
- JSON mode
Anthropic
- Models:
- claude-3-opus-20240229
- claude-3-sonnet-20240229
- claude-3-haiku-20240307
- Features:
- Chat completions
- System prompts
Quick Start
```typescript
import { AIClient } from 'ai-switcher';

// Initialize with your API keys
const client = new AIClient({
  anthropicApiKey: process.env.ANTHROPIC_API_KEY,
  openaiApiKey: process.env.OPENAI_API_KEY,
  defaultProvider: 'anthropic', // Optional: set a default provider
  defaultModel: 'claude-3-haiku-20240307' // Optional: set a default model
});

// Use the client
const response = await client.createCompletion([
  { role: 'user', content: 'Hello!' }
]);
```
Configuration-Based Model Switching
1. Define Configuration Profiles
```typescript
const CONFIG = {
  development: {
    provider: 'openai' as const,
    model: 'gpt-3.5-turbo',
    temperature: 0.7
  },
  production: {
    provider: 'anthropic' as const,
    model: 'claude-3-opus-20240229',
    temperature: 0.5
  },
  lowCost: {
    provider: 'openai' as const,
    model: 'gpt-3.5-turbo',
    temperature: 0.7
  },
  highQuality: {
    provider: 'anthropic' as const,
    model: 'claude-3-opus-20240229',
    temperature: 0.7
  }
};

// Use configuration
const response = await client.createCompletion(
  [{ role: 'user', content: 'Hello!' }],
  CONFIG.production
);
```
2. Task-Based Configuration
```typescript
const TASK_CONFIGS = {
  creativity: {
    provider: 'anthropic',
    model: 'claude-3-opus-20240229',
    temperature: 0.9
  },
  analysis: {
    provider: 'openai',
    model: 'gpt-4',
    temperature: 0.2
  },
  quick: {
    provider: 'openai',
    model: 'gpt-3.5-turbo',
    temperature: 0.7
  }
};

// Use based on task
const response = await client.createCompletion(
  [{ role: 'user', content: 'Analyze this data...' }],
  TASK_CONFIGS.analysis
);
```
Example: Switching Models with Configuration Only
A key feature of this library is that you can switch providers and models simply by updating your configuration, without changing any code. For example, your project configuration might look like this:
```typescript
const CONFIG = {
  development: {
    provider: 'openai' as const,
    model: 'gpt-3.5-turbo',
    temperature: 0.7
  },
  production: {
    provider: 'anthropic' as const,
    model: 'claude-3-opus-20240229',
    temperature: 0.5
  }
};
```
And in your code, you use the same client instance:
```typescript
import { AIClient } from 'ai-switcher';

const client = new AIClient({
  openaiApiKey: process.env.OPENAI_API_KEY,
  anthropicApiKey: process.env.ANTHROPIC_API_KEY,
  defaultProvider: CONFIG.development.provider,
  defaultModel: CONFIG.development.model,
});
```
When you're ready to switch environments, just update your config without modifying any code:
```typescript
const currentConfig = process.env.NODE_ENV === 'production'
  ? CONFIG.production
  : CONFIG.development;

const client = new AIClient({
  openaiApiKey: process.env.OPENAI_API_KEY,
  anthropicApiKey: process.env.ANTHROPIC_API_KEY,
  defaultProvider: currentConfig.provider,
  defaultModel: currentConfig.model,
});
```
Now, your client will automatically use the correct provider and model based on your configuration!
Common Parameters
These parameters work across all providers:
| Parameter | Type | Description | Default |
|---|---|---|---|
| `provider` | `'openai' \| 'anthropic'` | AI provider to use | `defaultProvider` |
| `model` | `string` | Model identifier | `defaultModel` |
| `temperature` | `number` | Response randomness (0-1) | `0.7` |
| `maxTokens` | `number` | Maximum response length in tokens | `4096` |
| `responseFormat` | `'text' \| 'json'` | Response format (`'json'` uses OpenAI's JSON mode) | `'text'` |
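Per-call options like these are typically merged over the client's defaults, with the per-call value winning. A minimal sketch of that merge, assuming illustrative names (`CompletionOptions`, `resolveOptions`) that are not part of this library's public API:

```typescript
// Hypothetical sketch of default/override merging; field names mirror the table above.
interface CompletionOptions {
  provider?: 'openai' | 'anthropic';
  model?: string;
  temperature?: number;
  maxTokens?: number;
  responseFormat?: 'text' | 'json';
}

// Assumed client-level defaults (matching the table's default column).
const DEFAULTS: Required<CompletionOptions> = {
  provider: 'anthropic',
  model: 'claude-3-haiku-20240307',
  temperature: 0.7,
  maxTokens: 4096,
  responseFormat: 'text',
};

// Spread order matters: later keys win, so per-call overrides replace defaults.
function resolveOptions(overrides: CompletionOptions = {}): Required<CompletionOptions> {
  return { ...DEFAULTS, ...overrides };
}
```

This is why passing only `{ temperature: 0.2 }` to a call changes the sampling temperature while the provider, model, and token limit keep their configured defaults.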
Advanced Usage
System Messages
```typescript
const messages = [
  {
    role: 'system',
    content: 'You are a helpful assistant that speaks in rhyme.'
  },
  {
    role: 'user',
    content: 'Tell me about the weather.'
  }
];

const response = await client.createCompletion(messages, CONFIG.production);
```
Error Handling with Fallbacks
```typescript
const withFallback = async (messages) => {
  try {
    return await client.createCompletion(messages, CONFIG.production);
  } catch (error) {
    console.log('Falling back to development config...');
    return await client.createCompletion(messages, CONFIG.development);
  }
};
```
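The same idea generalizes to trying any number of configurations in order. A minimal sketch, assuming a hypothetical `withFallbacks` helper (not part of this library) that receives the completion call as a function:

```typescript
// Hypothetical generalized fallback: try each config in order, return the
// first success, and rethrow the last error if every config fails.
type Config = { provider: string; model: string };

async function withFallbacks<T>(
  configs: Config[],
  call: (config: Config) => Promise<T>
): Promise<T> {
  let lastError: unknown;
  for (const config of configs) {
    try {
      return await call(config);
    } catch (error) {
      lastError = error; // remember the failure and move on to the next config
    }
  }
  throw lastError; // every config failed
}
```

Used with this library, `call` would wrap `client.createCompletion(messages, config)`, and the config list encodes your preference order (e.g. premium first, cheap last).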
Environment-Based Configuration
```typescript
const ENV = (process.env.NODE_ENV || 'development') as
  'development' | 'staging' | 'production';

const ENV_CONFIG = {
  development: {
    provider: 'openai',
    model: 'gpt-3.5-turbo',
    temperature: 0.7
  },
  staging: {
    provider: 'anthropic',
    model: 'claude-3-haiku-20240307',
    temperature: 0.7
  },
  production: {
    provider: 'anthropic',
    model: 'claude-3-opus-20240229',
    temperature: 0.5
  }
};

const response = await client.createCompletion(
  messages,
  ENV_CONFIG[ENV]
);
```
Cost-Optimized Usage
```typescript
const COST_CONFIGS = {
  cheap: {
    provider: 'openai',
    model: 'gpt-3.5-turbo',
    maxTokens: 256
  },
  balanced: {
    provider: 'anthropic',
    model: 'claude-3-haiku-20240307',
    maxTokens: 1024
  },
  premium: {
    provider: 'anthropic',
    model: 'claude-3-opus-20240229',
    maxTokens: 4096
  }
};

const getCostConfig = (taskComplexity: 'low' | 'medium' | 'high') => {
  switch (taskComplexity) {
    case 'low': return COST_CONFIGS.cheap;
    case 'medium': return COST_CONFIGS.balanced;
    case 'high': return COST_CONFIGS.premium;
  }
};

const response = await client.createCompletion(
  messages,
  getCostConfig('medium')
);
```
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
License
MIT