llmonitor
Model-agnostic LLM Observability & Cost Intelligence
Monitor, track, and optimize your LLM usage across any provider with just 3 lines of code.
🚀 Quick Start
```bash
npm install @llmonitor/sdk
```

```js
import { LLMonitor } from "@llmonitor/sdk";
import OpenAI from "openai";
// 1. Initialize LLMonitor
const monitor = new LLMonitor({
apiKey: "your-api-key", // Get this from your LLMonitor dashboard
// baseURL is optional; defaults to https://api.llmonitor.io
});
// 2. Wrap your LLM client
const openai = monitor.openai(new OpenAI({ apiKey: "your-openai-key" }));
// 3. Use exactly like normal - automatically logged!
const response = await openai.chat.completions.create({
model: "gpt-4",
messages: [{ role: "user", content: "Hello!" }],
});
```

That's it! Every request is now automatically tracked with:
- ✅ Token usage & costs
- ✅ Latency metrics
- ✅ Error tracking
- ✅ Session grouping
🙋‍♂️ Get Your API Key
- Sign up at LLMonitor Dashboard
- Organization created automatically - No setup needed!
- Copy your API key from Settings
Your organization and API key are created automatically when you sign up - no manual configuration required!
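Once you have the key, it's safest to load it from an environment variable rather than hardcoding it. A minimal sketch (the `LLMONITOR_API_KEY` variable name is our convention here, not something the SDK mandates):

```ts
import { LLMonitor } from "@llmonitor/sdk";

// Keep the key out of source control: set LLMONITOR_API_KEY in your
// shell, CI secrets, or a .env file loaded at startup.
const monitor = new LLMonitor({
  apiKey: process.env.LLMONITOR_API_KEY!,
});
```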
📊 What Gets Tracked
- Costs: Automatic cost calculation for all major providers
- Performance: Latency, token usage, success rates
- Context: Sessions, versions, custom metadata
- Errors: Failed requests with detailed error info
🎯 Supported Providers
| Provider | Support | Auto-Pricing | Models |
|---|---|---|---|
| OpenAI | ✅ Full | ✅ | GPT-4, GPT-3.5, GPT-4o |
| Anthropic | ✅ Full | ✅ | Claude-3, Claude-3.5 |
| Google AI | ✅ Full | ✅ | Gemini Pro, Flash |
| Cohere | ✅ Full | ✅ | Command, Command-R |
| DeepSeek | ✅ Full | ✅ | deepseek-chat, deepseek-reasoner |
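Every wrapper hangs off the same `LLMonitor` instance, so one configuration covers all of the providers above. A minimal sketch combining two of the wrappers shown below (provider credentials are assumed to be configured the way each client normally expects):

```ts
import { LLMonitor } from "@llmonitor/sdk";
import OpenAI from "openai";
import Anthropic from "@anthropic-ai/sdk";

// One monitor instance: every wrapped client reports into the same dashboard
const monitor = new LLMonitor({ apiKey: "llm_..." });

const openai = monitor.openai(new OpenAI());
const anthropic = monitor.anthropic(new Anthropic());
```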
📚 Provider Examples
OpenAI
```js
import { LLMonitor } from "@llmonitor/sdk";
import OpenAI from "openai";
const monitor = new LLMonitor({ apiKey: "llm_..." });
const openai = monitor.openai(new OpenAI());
const response = await openai.chat.completions.create({
model: "gpt-4",
messages: [{ role: "user", content: "Explain quantum computing" }],
});
```

Anthropic
```js
import { LLMonitor } from "@llmonitor/sdk";
import Anthropic from "@anthropic-ai/sdk";
const monitor = new LLMonitor({ apiKey: "llm_..." });
const anthropic = monitor.anthropic(new Anthropic());
const response = await anthropic.messages.create({
model: "claude-3-5-sonnet-20241022",
max_tokens: 1000,
messages: [{ role: "user", content: "Write a haiku about code" }],
});
```

Google AI
```js
import { LLMonitor } from "@llmonitor/sdk";
import { GoogleGenerativeAI } from "@google/generative-ai";
const monitor = new LLMonitor({ apiKey: "llm_..." });
const genAI = new GoogleGenerativeAI(process.env.GOOGLE_API_KEY);
const model = monitor.google(genAI.getGenerativeModel({ model: "gemini-pro" }));
const result = await model.generateContent("Explain machine learning");
```

DeepSeek
DeepSeek is fully compatible with the OpenAI SDK: just point the OpenAI client's `baseURL` at DeepSeek and use your DeepSeek API key. The available models are `deepseek-chat` and `deepseek-reasoner`.
```js
import { LLMonitor } from "@llmonitor/sdk";
import OpenAI from "openai";
const monitor = new LLMonitor({ apiKey: "llm_..." });
const deepseek = monitor.deepseek(
new OpenAI({
apiKey: "your-deepseek-key",
baseURL: "https://api.deepseek.com",
})
);
const response = await deepseek.chat.completions.create({
model: "deepseek-chat",
messages: [{ role: "user", content: "What is DeepSeek?" }],
});
```

Note: You can use any client compatible with the OpenAI API; just make sure to set the `baseURL` to `https://api.deepseek.com` and use your DeepSeek API key. More details in the official DeepSeek documentation.
🧑‍💻 CommonJS (require) usage
If you use classic Node.js or an older API:
```js
const { LLMonitor } = require("@llmonitor/sdk");
const OpenAI = require("openai");
const monitor = new LLMonitor({ apiKey: "llm_..." });
const openai = monitor.openai(new OpenAI());
openai.chat.completions
.create({
model: "gpt-4",
messages: [{ role: "user", content: "Hello!" }],
})
.then(console.log);
```

It also works with DeepSeek:
```js
const { LLMonitor } = require("@llmonitor/sdk");
const OpenAI = require("openai");
const monitor = new LLMonitor({ apiKey: "llm_..." });
const deepseek = monitor.deepseek(
  new OpenAI({
    apiKey: "your-deepseek-key",
    baseURL: "https://api.deepseek.com",
  })
);
deepseek.chat.completions
.create({
model: "deepseek-chat",
messages: [{ role: "user", content: "What is DeepSeek?" }],
})
.then(console.log);
```

⚙️ Configuration

```js
const monitor = new LLMonitor({
apiKey: "llm_...", // Required: Your LLMonitor API key
// baseURL is optional; defaults to https://api.llmonitor.io
sessionId: "user-123", // Optional: Group requests by session
versionTag: "v1.2.0", // Optional: Track different versions
debug: true, // Optional: Enable debug logging
enabled: true, // Optional: Toggle monitoring
metadata: {
// Optional: Custom metadata
userId: "user-123",
feature: "chat",
},
});
```
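Because `enabled` is a plain boolean, you can key it off the environment, e.g. to keep local development traffic out of your dashboard. A small sketch, assuming that disabling monitoring only stops event reporting while LLM calls pass through unchanged (as the error-handling section below describes):

```ts
import { LLMonitor } from "@llmonitor/sdk";

// Report events only from production; log monitor internals elsewhere
const monitor = new LLMonitor({
  apiKey: "llm_...",
  enabled: process.env.NODE_ENV === "production",
  debug: process.env.NODE_ENV !== "production",
});
```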
🔧 Express.js Middleware
Track all LLM calls across your Express app:
```js
import express from "express";
import { LLMonitor } from "@llmonitor/sdk";
const app = express();
const monitor = new LLMonitor({ apiKey: "llm_..." });
// Add middleware
app.use(monitor.express({
sessionId: (req) => req.user?.id,
metadata: (req) => ({ route: req.route?.path })
}));
// Now all LLM calls in your routes are automatically tracked
app.post("/chat", async (req, res) => {
const openai = monitor.openai(new OpenAI());
const response = await openai.chat.completions.create({...});
res.json(response);
});
```

🏗️ Advanced Usage
Session Tracking
Group related requests together:
```js
const monitor = new LLMonitor({
apiKey: "llm_...",
sessionId: "conversation-abc-123",
});
// All requests will be grouped under this session
```

Version Tagging
Track different prompt versions:
```js
const monitor = new LLMonitor({
apiKey: "llm_...",
versionTag: "prompt-v2.1",
});
```

Per-Request Options
Override config for specific requests:
```js
const openai = monitor.openai(new OpenAI());
await openai.chat.completions.create(
{
model: "gpt-4",
messages: [{ role: "user", content: "Hello" }],
},
{
sessionId: "special-session",
versionTag: "experimental",
metadata: { feature: "onboarding" },
}
);
```

Manual Event Logging
For custom integrations:
```js
await monitor.logEvent({
provider: "custom",
model: "my-model",
prompt: "Hello world",
completion: "Hi there!",
prompt_tokens: 10,
completion_tokens: 5,
latency_ms: 250,
status: 200,
cost_usd: 0.001,
});
```

🛡️ Error Handling
The SDK gracefully handles errors and never interrupts your LLM calls:
```js
const monitor = new LLMonitor({
apiKey: "llm_...",
debug: true // See any monitoring errors in console
});
// Even if LLMonitor is down, your OpenAI calls continue normally
const openai = monitor.openai(new OpenAI());
const response = await openai.chat.completions.create({...}); // Always works
```
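Provider errors behave the same way: they still reach your code as normal exceptions, and (per the error tracking described above) the failed request is recorded. A small sketch of ordinary try/catch handling around a wrapped client:

```ts
import OpenAI from "openai";
import { LLMonitor } from "@llmonitor/sdk";

const monitor = new LLMonitor({ apiKey: "llm_..." });
const openai = monitor.openai(new OpenAI());

try {
  await openai.chat.completions.create({
    model: "gpt-4",
    messages: [{ role: "user", content: "Hello" }],
  });
} catch (err) {
  // The failed request still shows up in LLMonitor with its error details
  console.error("OpenAI call failed:", err);
}
```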
🏷️ TypeScript Support
Full TypeScript support with proper types:
```ts
import { LLMonitor, LLMEvent, LLMonitorConfig } from "@llmonitor/sdk";
const config: LLMonitorConfig = {
apiKey: "llm_...",
debug: true,
};
const monitor = new LLMonitor(config);
```
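The exported `LLMEvent` type can also be used to type manual events before sending them. A small sketch, assuming `LLMEvent` matches the shape accepted by `logEvent` in the Manual Event Logging example above:

```ts
import { LLMonitor, LLMEvent } from "@llmonitor/sdk";

const monitor = new LLMonitor({ apiKey: "llm_..." });

// Type-checked event: field names follow the manual logging example
const event: LLMEvent = {
  provider: "custom",
  model: "my-model",
  prompt: "Hello world",
  completion: "Hi there!",
  prompt_tokens: 10,
  completion_tokens: 5,
  latency_ms: 250,
  status: 200,
  cost_usd: 0.001,
};

await monitor.logEvent(event);
```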
🔗 Links
- Dashboard - View your metrics
- Documentation - Complete guides
- GitHub - Source code
🤝 Contributing
We love contributions! Check out our contributing guide.
📄 License
MIT License - see LICENSE file.