# AiMetrics SDK

A comprehensive SDK for tracking and analyzing LLM/AI system metrics and performance.
## Installation

```bash
npm install @aimetrics/sdk
```

If you're using OpenAI (which is a peer dependency):

```bash
npm install @aimetrics/sdk openai
```
## Quick Start

```js
import { AiMetricsTracker } from "@aimetrics/sdk";
import OpenAI from "openai";

// Initialize the metrics tracker
const metrics = new AiMetricsTracker({
  apiKey: "your-metrics-api-key", // Get this from the AiMetrics dashboard
  clientId: "your-client-id", // Your unique identifier
  endpoint: "https://api.aimetrics.ai", // Optional: defaults to http://localhost:3001/api
  batchSize: 10, // Optional: batch size for sending metrics
  flushInterval: 5000, // Optional: flush interval in ms
  debug: true, // Optional: enable debug logging
});

// Initialize your LLM client
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

// Track LLM calls
async function chatCompletion(messages) {
  return await metrics.track(
    {
      model: "gpt-3.5-turbo",
      messages,
    },
    async () => {
      const response = await openai.chat.completions.create({
        model: "gpt-3.5-turbo",
        messages,
      });
      return response;
    }
  );
}

// Example usage
async function main() {
  try {
    const response = await chatCompletion([
      { role: "system", content: "You are a helpful assistant." },
      { role: "user", content: "Hello, how are you?" },
    ]);
    console.log(response.choices[0].message);
  } catch (error) {
    console.error("Error:", error);
  }
}

// Get metrics
async function getMetrics() {
  const startDate = new Date(Date.now() - 24 * 60 * 60 * 1000); // Last 24 hours
  const report = await metrics.getMetrics(startDate);
  console.log("Metrics:", report);
}

// Clean up when done
function cleanup() {
  metrics.destroy();
}
```
## Features

- Real-time metrics tracking
- Token usage monitoring
- Cost calculation
- Response quality analysis
- Performance metrics
- Content analysis
- Batch processing
- Error tracking
## Configuration

The SDK accepts the following configuration options:

| Option | Type | Required | Default | Description |
|---|---|---|---|---|
| `apiKey` | string | Yes | - | Your AiMetrics API key |
| `clientId` | string | Yes | - | Your unique client identifier |
| `endpoint` | string | No | `http://localhost:3001/api` | Custom metrics server endpoint |
| `batchSize` | number | No | 10 | Number of metrics to batch before sending |
| `flushInterval` | number | No | 5000 | Interval in ms to flush metrics |
| `debug` | boolean | No | false | Enable debug logging |
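
For example, only `apiKey` and `clientId` are strictly required; the batching options can be tuned when the defaults don't fit your traffic. A minimal sketch (the environment variable names and the specific values below are illustrative):

```js
import { AiMetricsTracker } from "@aimetrics/sdk";

const metrics = new AiMetricsTracker({
  apiKey: process.env.AIMETRICS_API_KEY, // placeholder env var name
  clientId: process.env.AIMETRICS_CLIENT_ID, // placeholder env var name
  endpoint: "https://api.aimetrics.ai", // omit to use http://localhost:3001/api
  batchSize: 25, // larger batches for high-traffic services
  flushInterval: 10000, // flush buffered metrics every 10 s instead of 5 s
});
```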
## Metrics Tracked

### Basic Metrics

- Total calls
- Success/failure rate
- Response times
- Token usage
- Costs

### Quality Metrics

- Response coherence
- Relevance scores
- Toxicity detection
- Content analysis

### Performance Metrics

- Time to first token
- Tokens per second
- Retry counts
- Error rates

### Content Analysis

- Word count
- Code snippet detection
- Programming languages used
- Sentiment analysis
- Average word length
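
The response shape of `getMetrics()` isn't documented in this README, so the snippet below is only a sketch: it fetches the last day of data with the call shown in the Quick Start and logs the raw payload; the summary fields named in the comments are assumptions, not a guaranteed API.

```js
// Sketch only: `metrics` is the AiMetricsTracker instance from the Quick Start.
async function printDailySummary() {
  const since = new Date(Date.now() - 24 * 60 * 60 * 1000); // last 24 hours
  const report = await metrics.getMetrics(since);

  // Inspect the raw payload before relying on specific fields.
  console.log(JSON.stringify(report, null, 2));

  // Hypothetical summary fields mirroring the categories above (adjust to the
  // actual response): report.totalCalls, report.successRate, report.totalTokens,
  // report.totalCost, report.tokensPerSecond, ...
}
```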
## Error Handling

The SDK includes comprehensive error handling:

```js
try {
  const response = await metrics.track(
    {
      model: "gpt-3.5-turbo",
      messages: [{ role: "user", content: "Hello" }],
    },
    async () => {
      // Your LLM call here
    }
  );
} catch (error) {
  if (error.response) {
    console.error("API Error:", error.response.data);
  } else {
    console.error("Error:", error.message);
  }
}
```
## Best Practices

- Initialize the tracker once and reuse the instance
- Use appropriate batch sizes for your use case
- Call `destroy()` when shutting down your application (see the sketch below)
- Enable debug mode during development
- Handle errors appropriately
- Use environment variables for sensitive data
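
For example, a long-running service would typically create the tracker once in a shared module and tear it down on shutdown. A minimal sketch (the module layout, environment variable names, and signal handling are illustrative, not prescribed by the SDK):

```js
// metrics.js: create one shared tracker for the whole process
import { AiMetricsTracker } from "@aimetrics/sdk";

export const metrics = new AiMetricsTracker({
  apiKey: process.env.AIMETRICS_API_KEY, // placeholder env var name
  clientId: process.env.AIMETRICS_CLIENT_ID, // placeholder env var name
  debug: process.env.NODE_ENV !== "production", // debug logging outside production
});

// Tear the tracker down when the process is asked to stop.
process.on("SIGTERM", () => {
  metrics.destroy();
  process.exit(0);
});
```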
## Support

For issues and feature requests, please visit our GitHub repository.
## License

MIT License