
llmonitor

Model-agnostic LLM Observability & Cost Intelligence

Monitor, track, and optimize your LLM usage across any provider with just 3 lines of code.

πŸš€ Quick Start

npm install @llmonitor/sdk

import { LLMonitor } from "@llmonitor/sdk";
import OpenAI from "openai";

// 1. Initialize LLMonitor
const monitor = new LLMonitor({
  apiKey: "your-api-key", // Get this from your LLMonitor dashboard
  // baseURL is optional; defaults to https://api.llmonitor.io
});

// 2. Wrap your LLM client
const openai = monitor.openai(new OpenAI({ apiKey: "your-openai-key" }));

// 3. Use exactly like normal - automatically logged!
const response = await openai.chat.completions.create({
  model: "gpt-4",
  messages: [{ role: "user", content: "Hello!" }],
});

That's it! Every request is now automatically tracked with:

  • βœ… Token usage & costs
  • βœ… Latency metrics
  • βœ… Error tracking
  • βœ… Session grouping

πŸƒβ€β™‚οΈ Get Your API Key

  1. Sign up at LLMonitor Dashboard
  2. Organization created automatically - no setup needed!
  3. Copy your API key from Settings

Your organization and API key are created automatically when you sign up - no manual configuration required!
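
In code, you would typically read the key from an environment variable rather than hard-coding it. A minimal sketch (the LLMONITOR_API_KEY variable name is just a convention, not something the SDK mandates):

import { LLMonitor } from "@llmonitor/sdk";

// Set in your shell or .env file: LLMONITOR_API_KEY=llm_...
const monitor = new LLMonitor({
  apiKey: process.env.LLMONITOR_API_KEY!, // keeps secrets out of source control
});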

πŸ“Š What Gets Tracked

  • Costs: Automatic cost calculation for all major providers
  • Performance: Latency, token usage, success rates
  • Context: Sessions, versions, custom metadata
  • Errors: Failed requests with detailed error info

🎯 Supported Providers

| Provider  | Support | Auto-Pricing | Models                           |
|-----------|---------|--------------|----------------------------------|
| OpenAI    | βœ… Full | βœ…           | GPT-4, GPT-3.5, GPT-4o           |
| Anthropic | βœ… Full | βœ…           | Claude-3, Claude-3.5             |
| Google AI | βœ… Full | βœ…           | Gemini Pro, Flash                |
| Cohere    | βœ… Full | βœ…           | Command, Command-R               |
| DeepSeek  | βœ… Full | βœ…           | deepseek-chat, deepseek-reasoner |

πŸ“– Provider Examples

OpenAI

import { LLMonitor } from "@llmonitor/sdk";
import OpenAI from "openai";

const monitor = new LLMonitor({ apiKey: "llm_..." });
const openai = monitor.openai(new OpenAI());

const response = await openai.chat.completions.create({
  model: "gpt-4",
  messages: [{ role: "user", content: "Explain quantum computing" }],
});

Anthropic

import { LLMonitor } from "@llmonitor/sdk";
import Anthropic from "@anthropic-ai/sdk";

const monitor = new LLMonitor({ apiKey: "llm_..." });
const anthropic = monitor.anthropic(new Anthropic());

const response = await anthropic.messages.create({
  model: "claude-3-5-sonnet-20241022",
  max_tokens: 1000,
  messages: [{ role: "user", content: "Write a haiku about code" }],
});

Google AI

import { LLMonitor } from "@llmonitor/sdk";
import { GoogleGenerativeAI } from "@google/generative-ai";

const monitor = new LLMonitor({ apiKey: "llm_..." });
const genAI = new GoogleGenerativeAI(process.env.GOOGLE_API_KEY);
const model = monitor.google(genAI.getGenerativeModel({ model: "gemini-pro" }));

const result = await model.generateContent("Explain machine learning");

DeepSeek

DeepSeek's API is OpenAI-compatible: use the OpenAI SDK with the baseURL pointed at DeepSeek's endpoint and your DeepSeek API key. The available models are deepseek-chat and deepseek-reasoner.

import { LLMonitor } from "@llmonitor/sdk";
import OpenAI from "openai";

const monitor = new LLMonitor({ apiKey: "llm_..." });
const deepseek = monitor.deepseek(
  new OpenAI({
    apiKey: "your-deepseek-key",
    baseURL: "https://api.deepseek.com",
  })
);

const response = await deepseek.chat.completions.create({
  model: "deepseek-chat",
  messages: [{ role: "user", content: "What is DeepSeek?" }],
});

Note: Any client compatible with the OpenAI API works; just set the baseURL to https://api.deepseek.com and use your DeepSeek API key. See the official DeepSeek documentation for details.

πŸ§‘β€πŸ’» CommonJS (require) usage

If you're on classic Node.js or an older codebase:

const { LLMonitor } = require("@llmonitor/sdk");
const OpenAI = require("openai");

const monitor = new LLMonitor({ apiKey: "llm_..." });
const openai = monitor.openai(new OpenAI());

openai.chat.completions
  .create({
    model: "gpt-4",
    messages: [{ role: "user", content: "Hello!" }],
  })
  .then(console.log);

It also works with DeepSeek:

const { LLMonitor } = require("@llmonitor/sdk");
const OpenAI = require("openai");

const monitor = new LLMonitor({ apiKey: "llm_..." });
const deepseek = monitor.deepseek(
  new OpenAI({
    apiKey: "your-deepseek-key",
    baseURL: "https://api.deepseek.com", // DeepSeek's OpenAI-compatible endpoint
  })
);

deepseek.chat.completions
  .create({
    model: "deepseek-chat",
    messages: [{ role: "user", content: "What is DeepSeek?" }],
  })
  .then(console.log);

βš™οΈ Configuration

const monitor = new LLMonitor({
  apiKey: "llm_...", // Required: Your LLMonitor API key
  // baseURL is optional; defaults to https://api.llmonitor.io
  sessionId: "user-123", // Optional: Group requests by session
  versionTag: "v1.2.0", // Optional: Track different versions
  debug: true, // Optional: Enable debug logging
  enabled: true, // Optional: Toggle monitoring
  metadata: {
    // Optional: Custom metadata
    userId: "user-123",
    feature: "chat",
  },
});
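
These options compose. For example, a sketch that disables reporting outside production and turns on debug logging everywhere else (assuming the usual NODE_ENV convention):

const monitor = new LLMonitor({
  apiKey: process.env.LLMONITOR_API_KEY!,
  enabled: process.env.NODE_ENV === "production", // no events sent from dev/test
  debug: process.env.NODE_ENV !== "production", // verbose logs while developing
});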

πŸ”§ Express.js Middleware

Track all LLM calls across your Express app:

import express from "express";
import OpenAI from "openai";
import { LLMonitor } from "@llmonitor/sdk";

const app = express();
const monitor = new LLMonitor({ apiKey: "llm_..." });

// Add middleware
app.use(monitor.express({
  sessionId: (req) => req.user?.id,
  metadata: (req) => ({ route: req.route?.path })
}));

// Now all LLM calls in your routes are automatically tracked
app.post("/chat", async (req, res) => {
  const openai = monitor.openai(new OpenAI());
  const response = await openai.chat.completions.create({
    model: "gpt-4",
    messages: [{ role: "user", content: "Hello!" }],
  });
  res.json(response);
});

πŸŽ›οΈ Advanced Usage

Session Tracking

Group related requests together:

const monitor = new LLMonitor({
  apiKey: "llm_...",
  sessionId: "conversation-abc-123",
});

// All requests will be grouped under this session

Version Tagging

Track different prompt versions:

const monitor = new LLMonitor({
  apiKey: "llm_...",
  versionTag: "prompt-v2.1",
});
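
Since versionTag can also be set per request (see Per-Request Options below), one way to A/B-test two prompt wordings is to tag each request with the variant it used. A sketch; the prompt texts and tag names here are made up for illustration:

import { LLMonitor } from "@llmonitor/sdk";
import OpenAI from "openai";

const monitor = new LLMonitor({ apiKey: "llm_..." });
const openai = monitor.openai(new OpenAI());

// Route half the traffic to each wording and tag the request,
// so the two versions can be compared in the dashboard.
const useA = Math.random() < 0.5;
await openai.chat.completions.create(
  {
    model: "gpt-4",
    messages: [
      {
        role: "user",
        content: useA
          ? "Summarize this in one sentence."
          : "Summarize this as three bullet points.",
      },
    ],
  },
  { versionTag: useA ? "summary-v1" : "summary-v2" }
);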

Per-Request Options

Override config for specific requests:

const openai = monitor.openai(new OpenAI());

await openai.chat.completions.create(
  {
    model: "gpt-4",
    messages: [{ role: "user", content: "Hello" }],
  },
  {
    sessionId: "special-session",
    versionTag: "experimental",
    metadata: { feature: "onboarding" },
  }
);

Manual Event Logging

For custom integrations:

await monitor.logEvent({
  provider: "custom",
  model: "my-model",
  prompt: "Hello world",
  completion: "Hi there!",
  prompt_tokens: 10,
  completion_tokens: 5,
  latency_ms: 250,
  status: 200,
  cost_usd: 0.001,
});
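
For instance, you could time a call to your own model and report the result. A sketch in which callMyModel is a hypothetical stand-in for your own inference code, and the token and cost fields are assumed to be optional:

// Hypothetical stand-in for your own inference code.
async function callMyModel(prompt: string): Promise<string> {
  return "Hi there!";
}

const prompt = "Hello world";
const start = Date.now();
const completion = await callMyModel(prompt);

await monitor.logEvent({
  provider: "custom",
  model: "my-model",
  prompt,
  completion,
  latency_ms: Date.now() - start, // measured wall-clock latency
  status: 200,
});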

πŸ”„ Error Handling

The SDK gracefully handles errors and never interrupts your LLM calls:

const monitor = new LLMonitor({
  apiKey: "llm_...",
  debug: true  // See any monitoring errors in console
});

// Even if LLMonitor is down, your OpenAI calls continue normally
const openai = monitor.openai(new OpenAI());
const response = await openai.chat.completions.create({
  model: "gpt-4",
  messages: [{ role: "user", content: "Hello!" }],
}); // Always works
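
Errors from the provider itself are recorded (see "What Gets Tracked") and should still surface to your code as usual, so ordinary error handling applies. A sketch continuing the example above:

try {
  await openai.chat.completions.create({
    model: "gpt-4",
    messages: [{ role: "user", content: "Hello!" }],
  });
} catch (err) {
  // The failed request is logged with its error details;
  // your own handling works the same as with a bare OpenAI client.
  console.error("OpenAI call failed:", err);
}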

πŸ—οΈ TypeScript Support

Full TypeScript support with proper types:

import { LLMonitor, LLMEvent, LLMonitorConfig } from "@llmonitor/sdk";

const config: LLMonitorConfig = {
  apiKey: "llm_...",
  debug: true,
};

const monitor = new LLMonitor(config);
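
The imported LLMEvent type can annotate manual events too. A sketch, assuming LLMEvent describes the logEvent payload shown earlier:

const event: LLMEvent = {
  provider: "custom",
  model: "my-model",
  prompt: "Hello world",
  completion: "Hi there!",
  prompt_tokens: 10,
  completion_tokens: 5,
  latency_ms: 250,
  status: 200,
  cost_usd: 0.001,
};

await monitor.logEvent(event);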


🀝 Contributing

We love contributions! Check out our contributing guide.

πŸ“„ License

MIT License - see LICENSE file.
