# @orchard9ai/comm9-api-client
TypeScript client for comm9 LLM routing service with full OpenAI compatibility and provider routing.
## Features
- 🔄 Provider Routing - Route requests to specific LLM providers (Ollama, vLLM)
- 🚀 OpenAI Compatible - Drop-in replacement for OpenAI API clients
- 📡 Streaming Support - Real-time chat completions with Server-Sent Events
- 🔧 TypeScript First - Fully typed with auto-generated types from OpenAPI spec
- ⚛️ React Query Ready - Built-in hooks for React applications
- 🌐 Universal - Works in Node.js and browser environments
## Installation
```bash
npm install @orchard9ai/comm9-api-client
```

For React applications:

```bash
npm install @orchard9ai/comm9-api-client @tanstack/react-query
```

## Quick Start
### Configuration
```typescript
import { configure } from '@orchard9ai/comm9-api-client';

// Configure the client
configure({
  baseURL: 'https://your-comm9-instance.com',
  auth: { type: 'bearer', token: 'your-jwt-token' },
  defaultProvider: 'ollama',
});
```

### Basic Usage
```typescript
import { createChatCompletion, listModels } from '@orchard9ai/comm9-api-client';

// Chat completion
const response = await createChatCompletion({
  model: 'llama3.2',
  provider: 'ollama', // comm9 extension
  messages: [
    { role: 'user', content: 'Hello!' }
  ],
  max_tokens: 100,
});

// List available models
const models = await listModels();
console.log(models.data); // Array of available models
```
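Since the client is OpenAI-compatible, the completion response follows the standard OpenAI chat completion shape, so the reply text can be read from the first choice:

```typescript
// Read the assistant's reply from the response above
console.log(response.choices[0].message.content);
```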
### React Usage

```tsx
import { useCreateChatCompletion, useListModels } from '@orchard9ai/comm9-api-client';

function ChatComponent() {
  const { mutate: sendMessage, data, isLoading } = useCreateChatCompletion();
  const { data: models } = useListModels();

  const handleSend = () => {
    sendMessage({
      data: {
        model: 'llama3.2',
        provider: 'ollama',
        messages: [{ role: 'user', content: 'Hello!' }],
      }
    });
  };

  return (
    <div>
      <button onClick={handleSend} disabled={isLoading}>
        Send Message
      </button>
      {data && <p>{data.choices[0].message.content}</p>}
    </div>
  );
}
```
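These hooks are built on `@tanstack/react-query`, so the component tree needs a `QueryClientProvider` at its root (standard react-query setup; `ChatComponent` is the component from above):

```tsx
import { QueryClient, QueryClientProvider } from '@tanstack/react-query';

const queryClient = new QueryClient();

function App() {
  return (
    <QueryClientProvider client={queryClient}>
      <ChatComponent />
    </QueryClientProvider>
  );
}
```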
## Streaming

```typescript
import { createStreamingClient } from '@orchard9ai/comm9-api-client';

const streamingClient = createStreamingClient();

// Stream chat completion
const stream = streamingClient.chatCompletionStream({
  model: 'llama3.2',
  provider: 'ollama',
  messages: [{ role: 'user', content: 'Tell me a story' }],
});

for await (const chunk of stream) {
  // The final chunk may carry an empty delta, so guard the access
  console.log(chunk.choices[0]?.delta?.content ?? '');
}
```
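To collect the complete response text instead of printing each token, accumulate the deltas from a fresh stream (a minimal sketch reusing the same `chatCompletionStream` call as above):

```typescript
// Accumulate streamed deltas into the complete response text
let fullText = '';
for await (const chunk of streamingClient.chatCompletionStream({
  model: 'llama3.2',
  provider: 'ollama',
  messages: [{ role: 'user', content: 'Tell me a story' }],
})) {
  fullText += chunk.choices[0]?.delta?.content ?? '';
}
console.log(fullText);
```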
## API Reference

### Chat Completions

```typescript
await createChatCompletion({
  model: 'llama3.2',
  provider: 'ollama', // optional: 'ollama' | 'vllm'
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Hello!' }
  ],
  max_tokens: 100,
  temperature: 0.7,
  stream: false, // set to true for streaming
});
```

### Models
```typescript
// List all models
const allModels = await listModels();

// List models from a specific provider
const ollamaModels = await listModels({ provider: 'ollama' });
```

### Embeddings
```typescript
const embeddings = await createEmbedding({
  model: 'nomic-embed-text',
  provider: 'ollama',
  input: 'Text to embed',
});
```
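Since the API is OpenAI-compatible, the vectors should be available under `data[i].embedding` (a quick sketch; field names assume the OpenAI embeddings response shape):

```typescript
// First embedding vector and its dimensionality, from the response above
const vector = embeddings.data[0].embedding;
console.log(vector.length);
```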
## Provider Routing

comm9 extends the OpenAI API with a `provider` field to route requests to specific LLM providers:

```typescript
// Route to Ollama
await createChatCompletion({
  model: 'llama3.2',
  provider: 'ollama',
  messages: [{ role: 'user', content: 'Hello' }],
});

// Route to vLLM
await createChatCompletion({
  model: 'microsoft/Phi-4-mini-reasoning',
  provider: 'vllm',
  messages: [{ role: 'user', content: 'Hello' }],
});

// Auto-route (uses first healthy provider)
await createChatCompletion({
  model: 'llama3.2',
  // provider omitted - comm9 will auto-route
  messages: [{ role: 'user', content: 'Hello' }],
});
```
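Because routing is explicit, manual fallback between providers takes only a few lines; a hedged sketch reusing the calls above (not a built-in feature of the client):

```typescript
import { createChatCompletion } from '@orchard9ai/comm9-api-client';

// Prefer vLLM, fall back to Ollama if the request fails
async function completeWithFallback(content: string) {
  try {
    return await createChatCompletion({
      model: 'microsoft/Phi-4-mini-reasoning',
      provider: 'vllm',
      messages: [{ role: 'user', content }],
    });
  } catch {
    return await createChatCompletion({
      model: 'llama3.2',
      provider: 'ollama',
      messages: [{ role: 'user', content }],
    });
  }
}
```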
## Authentication

### JWT Tokens

```typescript
import { setDefaultAuth } from '@orchard9ai/comm9-api-client';

setDefaultAuth({ type: 'bearer', token: 'your-jwt-token' });
```

### API Keys

```typescript
setDefaultAuth({ type: 'apikey', token: 'your-api-key' });
```

### Environment Variables
The client automatically uses these environment variables:
```bash
COMM9_API_URL=https://your-comm9-instance.com
COMM9_JWT_TOKEN=your-jwt-token
COMM9_API_KEY=your-api-key
COMM9_DEFAULT_PROVIDER=ollama
```
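If you prefer explicit wiring over automatic pickup (for example in tests), the same variables can be passed to `configure` yourself; a minimal Node.js sketch, assuming `process.env` is populated with the values above:

```typescript
import { configure } from '@orchard9ai/comm9-api-client';

// Explicitly wire the documented environment variables
configure({
  baseURL: process.env.COMM9_API_URL!,
  auth: { type: 'bearer', token: process.env.COMM9_JWT_TOKEN! },
  defaultProvider: process.env.COMM9_DEFAULT_PROVIDER ?? 'ollama',
});
```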
## Error Handling

The client provides enhanced error handling with comm9-specific error types:

```typescript
try {
  await createChatCompletion({ /* ... */ });
} catch (error: any) {
  if (error.name === 'Comm9APIError') {
    console.log(`Error type: ${error.type}`);
    console.log(`Message: ${error.message}`);
    console.log(`Param: ${error.param}`);
    console.log(`Status: ${error.status}`);
  }
}
```
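A common pattern on top of these fields is a bounded retry for rate-limit errors; a hedged sketch, assuming the service signals rate limiting with the conventional HTTP 429 status:

```typescript
import { createChatCompletion } from '@orchard9ai/comm9-api-client';

async function withRetry<T>(fn: () => Promise<T>, attempts = 3): Promise<T> {
  for (let i = 0; ; i++) {
    try {
      return await fn();
    } catch (error: any) {
      // Retry only rate-limit errors, at most `attempts` tries in total
      if (error?.name !== 'Comm9APIError' || error.status !== 429 || i >= attempts - 1) {
        throw error;
      }
      // Exponential backoff: 1s, 2s, 4s, ...
      await new Promise((resolve) => setTimeout(resolve, 2 ** i * 1000));
    }
  }
}

// Usage
const result = await withRetry(() =>
  createChatCompletion({ model: 'llama3.2', messages: [{ role: 'user', content: 'Hello' }] })
);
```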
## Type Safety

All request and response types are automatically generated from the OpenAI specification with comm9 extensions:

```typescript
import type {
  CreateChatCompletionRequest,
  CreateChatCompletionResponse,
  ChatMessage,
  Model,
} from '@orchard9ai/comm9-api-client';
```
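These types can annotate your own wrappers; a brief sketch using the exported request/response types (`askOnce` is a hypothetical helper, not part of the package):

```typescript
import type {
  CreateChatCompletionRequest,
  CreateChatCompletionResponse,
} from '@orchard9ai/comm9-api-client';
import { createChatCompletion } from '@orchard9ai/comm9-api-client';

// Hypothetical typed wrapper around createChatCompletion
async function askOnce(
  request: CreateChatCompletionRequest
): Promise<CreateChatCompletionResponse> {
  return createChatCompletion(request);
}
```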
## License

MIT