@lmnr-ai/lmnr v0.6.10
Laminar TypeScript
JavaScript/TypeScript SDK for Laminar.
Laminar is an open-source platform for engineering LLM products. Trace, evaluate, annotate, and analyze LLM data. Bring LLM applications to production with confidence.
Check our open-source repo and don't forget to star it ⭐
Quickstart
npm install @lmnr-ai/lmnr
And then in the code:
import { Laminar } from '@lmnr-ai/lmnr'
Laminar.initialize({ projectApiKey: '<PROJECT_API_KEY>' })
This will automatically instrument most of the LLM, Vector DB, and related calls with OpenTelemetry-compatible instrumentation.
Read docs to learn more.
Auto-instrumentations are provided by OpenLLMetry.
Where to place Laminar.initialize()
Laminar.initialize() must be called
- once in your application,
- as early as possible, but after other instrumentation libraries have been initialized
Instrumentation
In addition to automatic instrumentation, we provide a simple observe() wrapper.
This can be useful if you want to trace a request handler or a function which combines multiple LLM calls.
Example
import { OpenAI } from 'openai';
import { Laminar as L, observe } from '@lmnr-ai/lmnr';
L.initialize({ projectApiKey: "<LMNR_PROJECT_API_KEY>" });
const client = new OpenAI({ apiKey: '<OPENAI_API_KEY>' });
const poemWriter = async (topic = "turbulence") => {
const prompt = `write a poem about ${topic}`;
const response = await client.chat.completions.create({
model: "gpt-4o",
messages: [
{ role: "system", content: "You are a helpful assistant." },
{ role: "user", content: prompt }
]
});
const poem = response.choices[0].message.content;
return poem;
}
// Observe the function like this
await observe({name: 'poemWriter'}, async () => await poemWriter('laminar flow'))
Sending spans to Laminar from a different tracing library
Many tracing libraries accept spanProcessors as an initialization parameter.
Laminar exposes a LaminarSpanProcessor that you can use for this purpose.
Be careful NOT to call Laminar.initialize in such a setup, to avoid double tracing.
Example with @vercel/otel
For example, in Next.js instrumentation.ts you could do:
import { registerOTel } from '@vercel/otel'
export async function register() {
if (process.env.NEXT_RUNTIME === "nodejs") {
const { Laminar, LaminarSpanProcessor, initializeLaminarInstrumentations } = await import("@lmnr-ai/lmnr");
registerOTel({
serviceName: "my-service",
spanProcessors: [
new LaminarSpanProcessor(),
],
instrumentations: initializeLaminarInstrumentations(),
});
}
}
Evaluations
Quickstart
Install the package:
npm install @lmnr-ai/lmnr
Create a file named my-first-eval.ts with the following code:
import { evaluate } from '@lmnr-ai/lmnr';
const writePoem = ({topic}: {topic: string}) => {
return `This is a good poem about ${topic}`
}
evaluate({
data: [
{ data: { topic: 'flowers' }, target: { poem: 'This is a good poem about flowers' } },
{ data: { topic: 'cars' }, target: { poem: 'I like cars' } },
],
executor: (data) => writePoem(data),
evaluators: {
containsPoem: (output, target) => target.poem.includes(output) ? 1 : 0
},
groupName: 'my_first_feature'
})
Run the following commands:
export LMNR_PROJECT_API_KEY=<LMNR_PROJECT_API_KEY> # get from Laminar project settings
npx lmnr eval my-first-eval.ts
Visit the URL printed in the console to see the results.
Overview
Bring rigor to the development of your LLM applications with evaluations.
You can run evaluations locally by providing an executor (part of the logic used in your application) and evaluators (numeric scoring functions) to the evaluate function.
evaluate takes in the following parameters:
- data – an array of Datapoint objects, where each Datapoint has two keys: target and data, each containing a key-value object.
- executor – the logic you want to evaluate. This function must take data as the first argument, and can produce any output.
- evaluators – an object that maps evaluator names to evaluators. Each evaluator is a function that takes the output of the executor as the first argument and target as the second argument, and produces numeric scores. Each function can return either a single number or a Record<string, number> of scores.
- name – optional name for the evaluation. Automatically generated if not provided.
- groupName – optional group name for the evaluation. Evaluations within the same group can be compared visually side by side.
- config – optional additional override parameters.
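As an illustration of an evaluator producing multiple scores, here is a minimal sketch; the score names exactMatch and lengthRatio are our own and not part of the SDK:

```typescript
// A hypothetical evaluator that returns several named scores at once,
// as a Record<string, number>. Score names are illustrative only.
const multiScore = (output: string, target?: { poem: string }) => {
  const expected = target?.poem ?? '';
  return {
    // 1 if the output matches the target poem exactly, else 0
    exactMatch: output === expected ? 1 : 0,
    // how close the output length is to the target length, capped at 1
    lengthRatio: expected.length === 0 ? 0 : Math.min(output.length / expected.length, 1),
  };
};
```

Each score in the returned object shows up in Laminar as its own metric, so one evaluator can track several dimensions of quality at once.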
If you already have the outputs you want to evaluate, you can specify the executor as an identity function that takes in data and returns only the needed value(s) from it.
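A minimal sketch of that identity-executor pattern; the precomputedOutput key is our own naming convention, not an SDK field:

```typescript
// Pre-computed outputs are stored directly in each datapoint's `data`;
// the executor simply forwards them to the evaluators.
type EvalData = { precomputedOutput: string };
type EvalTarget = { poem: string };

// Identity executor: returns the stored output unchanged
const identityExecutor = (data: EvalData) => data.precomputedOutput;

// Same scoring logic as in the quickstart example
const containsPoem = (output: string, target: EvalTarget) =>
  target.poem.includes(output) ? 1 : 0;
```

These would then be passed to evaluate as executor: identityExecutor and evaluators: { containsPoem }, with the pre-computed outputs embedded in each datapoint's data.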
Read docs to learn more about evaluations.
Client for HTTP operations
Various interactions with the Laminar API are available via LaminarClient.
Agent
To run the Laminar agent, invoke client.agent.run:
import { LaminarClient } from '@lmnr-ai/lmnr';
const client = new LaminarClient({
projectApiKey: "<YOUR_PROJECT_API_KEY>",
});
const response = await client.agent.run({
prompt: "What is the weather in London today?",
});
// Be careful, `response` itself contains the state which may get large
console.log(response.result.content)
Streaming
Agent run supports streaming as well.
import { LaminarClient } from '@lmnr-ai/lmnr';
const client = new LaminarClient({
projectApiKey: "<YOUR_PROJECT_API_KEY>",
});
const response = await client.agent.run({
prompt: "What is the weather in London today?",
stream: true,
});
for await (const chunk of response) {
console.log(chunk.chunkType)
if (chunk.chunkType === 'step') {
console.log(chunk.summary);
} else if (chunk.chunkType === 'finalOutput') {
// Be careful, `chunk.content` contains the state which may get large
console.log(chunk.content.result);
}
}