@alicantorun/aimetrics-sdk v1.0.0

AiMetrics SDK

A comprehensive SDK for tracking and analyzing LLM/AI system metrics and performance.

Installation

npm install @aimetrics/sdk

openai is a peer dependency, so install it alongside the SDK if you plan to track OpenAI calls:

npm install @aimetrics/sdk openai

Quick Start

import { AiMetricsTracker } from "@aimetrics/sdk";
import OpenAI from "openai";

// Initialize the metrics tracker
const metrics = new AiMetricsTracker({
    apiKey: "your-metrics-api-key", // Get this from AiMetrics dashboard
    clientId: "your-client-id", // Your unique identifier
    endpoint: "https://api.aimetrics.ai", // Optional: defaults to http://localhost:3001/api
    batchSize: 10, // Optional: batch size for sending metrics
    flushInterval: 5000, // Optional: flush interval in ms
    debug: true, // Optional: enable debug logging
});

// Initialize your LLM client
const openai = new OpenAI({
    apiKey: process.env.OPENAI_API_KEY,
});

// Track LLM calls
async function chatCompletion(messages) {
    return await metrics.track(
        {
            model: "gpt-3.5-turbo",
            messages,
        },
        async () => {
            const response = await openai.chat.completions.create({
                model: "gpt-3.5-turbo",
                messages,
            });
            return response;
        }
    );
}

// Example usage
async function main() {
    try {
        const response = await chatCompletion([
            { role: "system", content: "You are a helpful assistant." },
            { role: "user", content: "Hello, how are you?" },
        ]);
        console.log(response.choices[0].message);
    } catch (error) {
        console.error("Error:", error);
    }
}

// Get metrics
async function getMetrics() {
    const startDate = new Date(Date.now() - 24 * 60 * 60 * 1000); // Last 24 hours
    const report = await metrics.getMetrics(startDate); // avoid shadowing the tracker instance
    console.log("Metrics:", report);
}

// Clean up when done
function cleanup() {
    metrics.destroy();
}

Features

  • Real-time metrics tracking
  • Token usage monitoring
  • Cost calculation
  • Response quality analysis
  • Performance metrics
  • Content analysis
  • Batch processing
  • Error tracking
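Batch processing means tracked events accumulate in a buffer and are sent once the batch size is reached (or the flush interval fires). A simplified sketch of that idea, purely for illustration and not the SDK's internal code:

```javascript
// Hypothetical illustration of batch-and-flush behavior: events buffer
// until batchSize is reached, then the whole batch is handed to onFlush.
class MiniBatcher {
    constructor({ batchSize = 10, onFlush }) {
        this.batchSize = batchSize;
        this.onFlush = onFlush; // receives the array of buffered events
        this.buffer = [];
    }

    record(event) {
        this.buffer.push(event);
        if (this.buffer.length >= this.batchSize) this.flush();
    }

    flush() {
        if (this.buffer.length === 0) return;
        this.onFlush(this.buffer.splice(0, this.buffer.length));
    }
}
```

In the real SDK, a timer firing every `flushInterval` milliseconds would also trigger a flush, so partially filled batches still get sent.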

Configuration

The SDK accepts the following configuration options:

| Option        | Type    | Required | Default                     | Description                                |
| ------------- | ------- | -------- | --------------------------- | ------------------------------------------ |
| apiKey        | string  | Yes      | -                           | Your AiMetrics API key                     |
| clientId      | string  | Yes      | -                           | Your unique client identifier              |
| endpoint      | string  | No       | http://localhost:3001/api   | Custom metrics server endpoint             |
| batchSize     | number  | No       | 10                          | Number of metrics to batch before sending  |
| flushInterval | number  | No       | 5000                        | Interval in ms to flush metrics            |
| debug         | boolean | No       | false                       | Enable debug logging                       |

Metrics Tracked

Basic Metrics

  • Total calls
  • Success/failure rate
  • Response times
  • Token usage
  • Costs
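Cost is derived from token usage. A minimal sketch of the calculation, using the OpenAI `usage` response shape; the per-1K-token rates here are placeholders for illustration, since real pricing varies by model and changes over time:

```javascript
// Estimate call cost from token usage. The rates are placeholder values,
// not real pricing; `prompt_tokens`/`completion_tokens` follow the
// OpenAI response's `usage` object shape.
const RATES_PER_1K = { prompt: 0.0005, completion: 0.0015 };

function estimateCost(usage) {
    return (
        (usage.prompt_tokens / 1000) * RATES_PER_1K.prompt +
        (usage.completion_tokens / 1000) * RATES_PER_1K.completion
    );
}
```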

Quality Metrics

  • Response coherence
  • Relevance scores
  • Toxicity detection
  • Content analysis

Performance Metrics

  • Time to first token
  • Tokens per second
  • Retry counts
  • Error rates
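Tokens per second follows from the token count and the call's wall-clock duration. A minimal helper sketching the arithmetic (illustrative only, not part of the SDK):

```javascript
// Compute throughput in tokens/second from a total token count and the
// elapsed wall-clock time in milliseconds. Guard against a zero or
// negative duration to avoid division by zero.
function tokensPerSecond(totalTokens, elapsedMs) {
    if (elapsedMs <= 0) return 0;
    return totalTokens / (elapsedMs / 1000);
}
```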

Content Analysis

  • Word count
  • Code snippet detection
  • Programming languages used
  • Sentiment analysis
  • Average word length
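Word count and average word length are simple to compute from a response's text. A rough sketch of that kind of analysis (the SDK's own implementation may differ):

```javascript
// Split on whitespace to count words and average their lengths.
// Empty input yields zero for both metrics.
function analyzeText(text) {
    const words = text.trim().split(/\s+/).filter(Boolean);
    const wordCount = words.length;
    const avgWordLength =
        wordCount === 0
            ? 0
            : words.reduce((sum, w) => sum + w.length, 0) / wordCount;
    return { wordCount, avgWordLength };
}
```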

Error Handling

The SDK includes comprehensive error handling:

try {
    const response = await metrics.track(
        {
            model: "gpt-3.5-turbo",
            messages: [{ role: "user", content: "Hello" }],
        },
        async () => {
            // Your LLM call here
        }
    );
} catch (error) {
    if (error.response) {
        console.error("API Error:", error.response.data);
    } else {
        console.error("Error:", error.message);
    }
}

Best Practices

  1. Initialize the tracker once and reuse the instance
  2. Use appropriate batch sizes for your use case
  3. Call destroy() when shutting down your application
  4. Enable debug mode during development
  5. Handle errors appropriately
  6. Use environment variables for sensitive data
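Best practice 6 can be applied by loading the tracker's configuration from the environment, falling back to the documented defaults for optional fields. The variable names below are illustrative choices, not ones mandated by the SDK:

```javascript
// Pull sensitive configuration from environment variables. The
// AIMETRICS_* names are hypothetical; only the option names and
// defaults match the configuration table above.
function loadConfig(env = process.env) {
    if (!env.AIMETRICS_API_KEY || !env.AIMETRICS_CLIENT_ID) {
        throw new Error("AIMETRICS_API_KEY and AIMETRICS_CLIENT_ID are required");
    }
    return {
        apiKey: env.AIMETRICS_API_KEY,
        clientId: env.AIMETRICS_CLIENT_ID,
        endpoint: env.AIMETRICS_ENDPOINT || "http://localhost:3001/api",
        batchSize: Number(env.AIMETRICS_BATCH_SIZE) || 10,
        flushInterval: Number(env.AIMETRICS_FLUSH_INTERVAL) || 5000,
        debug: env.AIMETRICS_DEBUG === "true",
    };
}
```

The resulting object can be passed straight to `new AiMetricsTracker(loadConfig())`.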

Support

For issues and feature requests, please visit our GitHub repository.

License

MIT License