@bluefly/llm v0.1.1
BFLLM - LLM Service Platform
Status: ✅ PRODUCTION-READY - 27 passing tests, multi-provider LLM client operational
Focus: TypeScript wrapper for LLM APIs with Drupal integration
A TypeScript client library providing a unified interface to multiple LLM providers, designed for integration with the Drupal LLM module.
Current Status
What Works
- ✅ 27 passing tests - Comprehensive test coverage confirmed
- ✅ Production-ready multi-provider LLM client
- ✅ Ollama provider integration (default)
- ✅ OpenAI provider integration
- ✅ Anthropic provider integration
- ✅ Model switching based on model name patterns
- ✅ TypeScript type safety with full definitions
- ✅ Robust error handling and retry mechanisms
- ✅ Optimized HTTP requests to LLM APIs
- ✅ Real implementations (not mocks or stubs)
What's In Development
- Advanced model management
- Streaming support
- Intelligent provider selection
- Enhanced observability
- Enterprise features
Architecture Notes
- Client library focused (intentional design choice)
- Model lists instead of complex registry (simplicity by design)
- Pattern-based provider selection (reliable and maintainable; see the sketch after this list)
- Core HTTP functionality prioritized over streaming
- Production-focused logging and error handling
- Enterprise features available through Drupal integration
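To make the pattern-based selection concrete, the sketch below shows one way a model name can map to a provider. It is illustrative only: resolveProvider and the exact patterns are assumptions, not the library's internals.
// Illustrative sketch of pattern-based provider selection (not the library's actual code).
type Provider = 'ollama' | 'openai' | 'anthropic';

function resolveProvider(model: string): Provider {
  if (/^gpt-/i.test(model)) return 'openai';      // e.g. gpt-3.5-turbo, gpt-4
  if (/^claude/i.test(model)) return 'anthropic'; // e.g. claude-3-haiku
  return 'ollama';                                // default: local models such as llama2 or mistral
}

resolveProvider('gpt-3.5-turbo');  // 'openai'
resolveProvider('claude-3-haiku'); // 'anthropic'
resolveProvider('llama2');         // 'ollama' (default)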
Installation
# Clone the repository
git clone <repository>
cd bfllm
# Install dependencies
npm install
# Build the project
npm run build
# Start the service
npm run start
Ollama Configuration
BFLLM uses Ollama as the default LLM provider for privacy and cost-effectiveness.
Setting up Ollama
- Install Ollama from https://ollama.ai
- Start the Ollama service:
ollama serve
- Pull the models you want to use:
ollama pull llama2
ollama pull mistral
ollama pull codellama
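Before pointing BFLLM at Ollama, you can confirm the server is reachable by querying Ollama's list-models endpoint (GET /api/tags). This check is independent of @bluefly/llm and assumes Node 18+ for the global fetch:
// Sanity check against the local Ollama server (not part of @bluefly/llm).
const res = await fetch('http://localhost:11434/api/tags');
if (!res.ok) throw new Error(`Ollama not reachable: ${res.status}`);
const { models } = await res.json();
console.log(models.map((m: { name: string }) => m.name)); // e.g. ['llama2:latest', 'mistral:latest']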
Environment Variables
# Optional: Change Ollama server URL (default: http://localhost:11434)
export OLLAMA_BASE_URL=http://localhost:11434
# For cloud providers (when needed)
export OPENAI_API_KEY=your-api-key
export ANTHROPIC_API_KEY=your-api-key
Basic Usage
Client Usage
import { LLMClient } from '@bluefly/llm';
// Default usage with Ollama (no API key needed)
const client = new LLMClient({
model: 'llama2' // Defaults to Ollama provider
});
const response = await client.generateText('Hello, world!');
console.log(response.text);
// Using other providers
const openaiClient = new LLMClient({
model: 'gpt-3.5-turbo',
apiKey: process.env.OPENAI_API_KEY
});
const anthropicClient = new LLMClient({
model: 'claude-3-haiku',
apiKey: process.env.ANTHROPIC_API_KEY
});
API Server
Start the API server:
npm run start:server
API Endpoints:
- POST /api/v1/generate - Generate text completion
- POST /api/v1/embeddings - Generate embeddings
- GET /api/v1/models - List available models
- GET /health - Health check
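A request to the generate endpoint might look like the sketch below (the port comes from the PORT setting in Configuration). The request and response field names are assumptions for illustration, not a documented contract:
// Hedged example: calling the server started with `npm run start:server`.
// The body shape ({ model, prompt }) is assumed; check the server source for the exact payload.
const res = await fetch('http://localhost:3000/api/v1/generate', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ model: 'llama2', prompt: 'Hello, world!' }),
});
console.log(await res.json()); // inspect the response shape; it likely includes the generated text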
Architecture
bfllm/
├── src/
│   ├── clients/     # External service clients
│   ├── models/      # Model definitions
│   ├── providers/   # LLM provider integrations
│   └── services/    # Core services
├── examples/        # Usage examples
└── tests/           # Test suites
Features
Model Management
- Dynamic model loading and switching
- Model registry with metadata
- Version control for models
- Performance metrics tracking
Inference Optimization
- Request batching
- Response caching
- Token usage optimization
- Fallback strategies (see the sketch below)
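A fallback strategy can already be layered on top of the client using only the LLMClient API shown in Basic Usage; the sketch below tries a local Ollama model first and falls back to OpenAI on failure.
import { LLMClient } from '@bluefly/llm';

// Fallback sketch built on the documented client API: local model first, cloud on failure.
async function generateWithFallback(prompt: string): Promise<string> {
  const local = new LLMClient({ model: 'llama2' }); // Ollama, no API key needed
  try {
    return (await local.generateText(prompt)).text;
  } catch {
    const cloud = new LLMClient({
      model: 'gpt-3.5-turbo',
      apiKey: process.env.OPENAI_API_KEY,
    });
    return (await cloud.generateText(prompt)).text;
  }
}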
Observability
- Structured logging
- Metrics collection
- Distributed tracing
- Error tracking
Security
- API key management
- Rate limiting
- Input validation
- Output sanitization
Configuration
Create a .env file:
# LLM Providers
OPENAI_API_KEY=your-api-key
ANTHROPIC_API_KEY=your-api-key
# Server Configuration
PORT=3000
NODE_ENV=production
# Observability
LOG_LEVEL=info
ENABLE_METRICS=true
# Model Settings
DEFAULT_MODEL=gpt-3.5-turbo
MAX_TOKENS=2048
TEMPERATURE=0.7
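The values above can be wired into the client at startup; here is a minimal sketch assuming the dotenv package is installed. Only DEFAULT_MODEL and the API key are shown, since those map directly onto the documented LLMClient options:
// Sketch: loading .env values into the client (assumes the dotenv package).
import 'dotenv/config';
import { LLMClient } from '@bluefly/llm';

const client = new LLMClient({
  model: process.env.DEFAULT_MODEL ?? 'llama2', // DEFAULT_MODEL from the .env above
  apiKey: process.env.OPENAI_API_KEY,           // only needed for cloud providers
});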
Development
# Install dependencies
npm install
# Build the project
npm run build
# Run tests
npm test
# Start development server
npm run dev
Testing
# Run all tests
npm test
# Run with coverage
npm run test:coverage
# Run specific test suite
npm test -- unit
Contributing
- Follow TypeScript best practices
- Add tests for new features
- Update documentation
- Use conventional commit messages
- Ensure all tests pass before pushing
Known Issues
- Limited model management capabilities
- No streaming support
- Basic error handling
- Limited observability
- No enterprise features
License
MIT
Last Updated: June 2025