@endlessriver/optimaiz v1.0.5 • MIT license • Last release: 6 months ago

@endlessriver/optimaiz

Unified tracing, feedback, and cost analytics for LLM-based apps.
A drop-in SDK to track prompts, responses, cost, errors, and feedback across OpenAI, LangChain, Sarvam, Gemini, and more. Visit https://optimaiz.io to observe, analyze, optimize, and stay compliant.


📦 Installation

npm install @endlessriver/optimaiz

🛠️ Initialization

import { OptimaizClient } from "@endlessriver/optimaiz";

const optimaiz = new OptimaizClient({
  token: process.env.OPTIMAIZ_API_KEY!,
});

🚀 Basic Usage: wrapLLMCall

const { response, traceId } = await optimaiz.wrapLLMCall({
  provider: "openai",
  model: "gpt-4o",
  promptTemplate: [{ role: "user", type: "text", value: "Summarize this" }],
  promptVariables: {},
  call: () => openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Summarize this" }],
  }),
});

This handles:

  • ✅ Starting the trace
  • ✅ Appending the raw response
  • ✅ Finalizing the trace with latency
  • ✅ Logging any errors
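
These steps compose the lower-level APIs documented later in this README (startTrace, appendResponse, finalizeTrace, logError). The following is a hypothetical sketch of that composition, for illustration only; it is not the package's actual implementation, which also records prompt templates and latency:

```typescript
// Hypothetical sketch of wrapLLMCall's behavior, composed from the
// manual startTrace / appendResponse / finalizeTrace / logError APIs.
// Illustration only; not the package's actual implementation.
interface TraceClient {
  startTrace(args: Record<string, unknown>): Promise<void>;
  appendResponse(args: Record<string, unknown>): Promise<void>;
  finalizeTrace(traceId: string): Promise<void>;
  logError(traceId: string, error: Record<string, unknown>): Promise<void>;
}

async function wrapLLMCallSketch<T>(
  client: TraceClient,
  opts: { traceId: string; provider: string; model: string; call: () => Promise<T> }
): Promise<{ response: T; traceId: string }> {
  const { traceId, provider, model, call } = opts;
  await client.startTrace({ traceId, provider, model });  // 1. start trace
  try {
    const response = await call();                        // run the LLM call
    // 2. append the raw response to the trace
    await client.appendResponse({ traceId, rawResponse: response, provider, model });
    await client.finalizeTrace(traceId);                  // 3. finalize with latency
    return { response, traceId };
  } catch (err: any) {
    await client.logError(traceId, { message: err?.message }); // 4. log any error
    throw err;
  }
}
```

On success the wrapper returns the provider response untouched; on failure it logs the error to the trace and rethrows so application-level handling still works.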

βš™οΈ Advanced Usage with IDs

const { response } = await optimaiz.wrapLLMCall({
  traceId: "trace_123",
  agentId: "agent:translator",
  userId: "user_456",
  flowId: "translate_email",
  threadId: "email_translation",
  sessionId: "session_2025_06_01_user_456",
  provider: "openai",
  model: "gpt-4o",
  promptTemplate: [{ role: "user", type: "text", value: "Translate to French: {text}" }],
  promptVariables: { text: "Hello, how are you?" },
  call: () => openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Translate to French: Hello, how are you?" }],
  }),
});

🧩 Manual Usage (Start, Append, Finalize)

Sometimes you need lower-level control (e.g., multiple responses, partial logs).

🔹 Start a trace manually

await optimaiz.startTrace({
  traceId: "trace_xyz",
  agentId: "imageAnalyzer",
  userId: "user_999",
  flowId: "caption_image",
  promptTemplate: [
    { role: "user", type: "image", value: "https://cdn.site/image.png" },
    { role: "user", type: "text", value: "What's in this image?" }
  ],
  promptVariables: {},
  provider: "openai",
  model: "gpt-4o"
});

🔹 Append a model response

await optimaiz.appendResponse({
  traceId: "trace_xyz",
  rawResponse: response,
  provider: "openai",
  model: "gpt-4o"
});

🔹 Finalize the trace

await optimaiz.finalizeTrace("trace_xyz");

❌ Log an Error to a Trace

await optimaiz.logError("trace_abc123", {
  message: "Timeout waiting for OpenAI response",
  code: "TIMEOUT_ERROR",
  details: {
    timeout: "30s",
    model: "gpt-4o",
    retryAttempt: 1,
  },
});

🔧 Example: Manual Trace with Error Handling

try {
  const response = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Summarize this" }],
  });

  await optimaiz.appendResponse({
    traceId,
    rawResponse: response,
    provider: "openai",
    model: "gpt-4o",
  });

  await optimaiz.finalizeTrace(traceId);
} catch (err: any) {
  await optimaiz.logError(traceId, {
    message: err.message,
    code: err.code || "UNCAUGHT_EXCEPTION",
    details: err.stack,
  });
  throw err;
}

🧪 Tool Prompt Helper

const { promptTemplate, promptVariables } = optimaiz.generatePromptFromTools({
  toolInfo: [weatherTool],
  toolInput: { name: "get_weather", arguments: { location: "Delhi" } },
});
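
The weatherTool referenced above is not defined in the snippet. Assuming it follows the common OpenAI function-tool schema (the exact shape generatePromptFromTools expects may differ), it could look like:

```typescript
// Hypothetical weatherTool definition in the OpenAI function-tool style.
// The exact shape expected by generatePromptFromTools may differ.
const weatherTool = {
  type: "function",
  function: {
    name: "get_weather",
    description: "Get the current weather for a location",
    parameters: {
      type: "object",
      properties: {
        location: { type: "string", description: "City name, e.g. Delhi" },
      },
      required: ["location"],
    },
  },
};
```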

🔄 Compose Prompts from Template

const { prompts, promptTemplate, promptVariables } = optimaiz.composePrompts(
  [
    { role: "system", content: "You are a poet." },
    { role: "user", content: "Write a haiku about {topic}" },
  ],
  { topic: "the ocean" }
);
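
Assuming composePrompts performs simple {variable} brace substitution from the second argument, its core behavior can be sketched as follows (the real method also returns the original promptTemplate and promptVariables for trace logging):

```typescript
// Hypothetical sketch of the {variable} substitution that composePrompts
// appears to perform. Unknown placeholders are left untouched.
function substituteVars(template: string, vars: Record<string, string>): string {
  // Replace each {name} placeholder with its value from vars.
  return template.replace(/\{(\w+)\}/g, (match: string, key: string) => vars[key] ?? match);
}
```

With the example above, substituting { topic: "the ocean" } into "Write a haiku about {topic}" yields "Write a haiku about the ocean".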

📂 Integration Examples

✅ OpenAI SDK

const userPrompt = "Summarize this blog about AI agents";

await optimaiz.wrapLLMCall({
  provider: "openai",
  model: "gpt-4o",
  agentId: "summarizer",
  userId: "user_123",
  promptTemplate: [{ role: "user", type: "text", value: userPrompt }],
  promptVariables: {},
  call: () => openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: userPrompt }],
  }),
});

✅ LangChain

const prompt = PromptTemplate.fromTemplate("Tell me a joke about {topic}");
const formatted = await prompt.format({ topic: "elephants" });

await optimaiz.wrapLLMCall({
  provider: "openai",
  model: "gpt-4o",
  agentId: "joke-bot",
  userId: "user_321",
  flowId: "joke-generation",
  promptTemplate: [{ role: "user", type: "text", value: "Tell me a joke about {topic}" }],
  promptVariables: { topic: "elephants" },
  call: () => langchainModel.invoke(formatted),
});

✅ Sarvam AI (Audio)

await optimaiz.wrapLLMCall({
  provider: "sarvam",
  model: "shivang",
  agentId: "transcriber",
  userId: "user_999",
  flowId: "transcribe",
  promptTemplate: [{ role: "user", type: "audio", value: "https://cdn.site/audio.wav" }],
  promptVariables: {},
  call: () => sarvam.speechToText({ url: "https://cdn.site/audio.wav" }),
});

✅ Gemini (Google Vertex AI)

await optimaiz.wrapLLMCall({
  provider: "google",
  model: "gemini-pro",
  promptTemplate: [{ role: "user", type: "text", value: "Write a haiku about the ocean." }],
  promptVariables: {},
  call: () => gemini.generateContent({
    contents: [{ role: "user", parts: [{ text: "Write a haiku about the ocean." }] }],
  }),
});

📊 Field Scope & Best Practices

| Field     | Scope       | Used for...                           | Example Value            |
|-----------|-------------|---------------------------------------|--------------------------|
| traceId   | Per action  | Track one LLM/tool call               | trace_a9f3               |
| flowId    | Per task    | Multi-step task grouping              | flow_generate_poem       |
| agentId   | Per trace   | Identify AI agent handling task       | calendarAgent            |
| threadId  | Per topic   | Group related flows by theme/intent   | thread_booking           |
| sessionId | Per session | Temporal or login-bound grouping      | session_2025_06_01_user1 |
| userId    | Global      | Usage, feedback, and cost attribution | user_321                 |

✅ Use These for Full Insight:

  • agentId: Enables per-agent cost & prompt optimization
  • userId: Enables user behavior analytics & pricing insights
  • flowId: Helps trace multi-step user tasks
  • traceId: Use like a span for one prompt/response
  • threadId, sessionId: Group related interactions over time or topics
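
In application code, the identifier fields above can be grouped into one type so every call site passes them consistently. This is a hypothetical helper (TraceIdentifiers is not part of the SDK); the field names mirror the wrapLLMCall options shown earlier:

```typescript
// Hypothetical helper type grouping the identifier fields accepted by
// wrapLLMCall / startTrace. Only traceId is per-call; the rest are
// optional grouping dimensions.
interface TraceIdentifiers {
  traceId: string;    // one span per prompt/response
  flowId?: string;    // multi-step task grouping
  agentId?: string;   // which AI agent handled the task
  threadId?: string;  // theme/intent grouping
  sessionId?: string; // temporal or login-bound grouping
  userId?: string;    // global usage, feedback, and cost attribution
}

// Example: build once, then spread into a wrapLLMCall options object.
const ids: TraceIdentifiers = {
  traceId: "trace_a9f3",
  flowId: "flow_generate_poem",
  userId: "user_321",
};
```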

✨ Optimaiz Features

  • ✅ Works with OpenAI, Gemini, Sarvam, Mistral, LangChain, Anthropic
  • 🧠 RAG and function/tool-call support
  • 🔍 Token usage + latency tracking
  • 📉 Cost and model metadata logging
  • 🧪 Error + feedback logging
  • 🔄 Templated prompt builder + tool integration support
  • 🧩 Full control via start/append/finalize or simple wrapLLMCall

🔗 Get Started

  1. Install: npm install @endlessriver/optimaiz
  2. Add your API key: process.env.OPTIMAIZ_API_KEY
  3. Use wrapLLMCall() for LLM/tool calls
  4. Pass userId, agentId, and flowId for best observability
  5. Analyze and improve prompt cost, user flow, and LLM performance

Need hosted dashboards, insights, or tuning support?
Visit 👉 https://optimaiz.io