lmsystems v0.2.0 • MIT License • Last release 9 months ago

LMSystems SDK


The official Node.js SDK for integrating purchased graphs from the LMSystems Marketplace.

Overview

This SDK provides two main ways to integrate LMSystems marketplace graphs:

  1. LmsystemsClient: A standalone client for direct chat/streaming interactions with purchased graphs
  2. PurchasedGraph: A class for using purchased graphs as subgraphs within your own LangGraph applications

Installation

npm install lmsystems

Prerequisites

  • Node.js >= 18
  • An LMSystems account with purchased graphs
  • Your LMSystems API key (found at lmsystems.ai/account)

Quick Start

LmsystemsClient

A high-level client for interacting with LMSystems marketplace graphs as chat applications. The client provides thread management, streaming capabilities, and metadata filtering.

Basic Usage

The simplest way to use the client:

import { LmsystemsClient } from 'lmsystems';
import dotenv from 'dotenv';

dotenv.config();

async function main() {
    // Initialize client
    const client = new LmsystemsClient({
        graphName: 'github-agent-48',
        apiKey: process.env.LMSYSTEMS_API_KEY
    });
    await client.setup();

    try {
        // Create a conversation thread
        const thread = await client.createThread();

        // Stream a response
        const stream = client.stream(
            {
                messages: [{ role: 'user', content: "hello" }],
                repo_url: 'https://github.com/example/repo',
                github_token: process.env.GITHUB_TOKEN,
                model_name: 'gpt-4'
            },
            {
                threadId: thread.thread_id,
                streamMode: ["messages-tuple"]
            }
        );

        // Process the response
        for await (const chunk of stream) {
            console.log(chunk);
        }

        // Clean up
        await client.deleteThread(thread.thread_id);
    } catch (error) {
        console.error('Error:', error);
    }
}

main();

Thread Management

The client provides comprehensive thread management for maintaining conversation context:

// Create a thread with metadata
const thread = await client.createThread({
    metadata: {
        conversation_id: 'chat-123',
        project: 'github-analysis'
    }
});

// Get conversation history
const history = await client.getThreadHistory(thread.thread_id, {
    limit: 10,
    metadata: { source: 'chat' }
});

// Create a branch of the conversation
const branchedThread = await client.copyThread(thread.thread_id);

// Get current conversation state
const state = await client.getThreadState(thread.thread_id);

Advanced Streaming

Configure streaming behavior with various options:

const stream = client.stream(
    {
        messages: [{ role: 'user', content: 'Hello' }],
        // Add graph-specific parameters
        model: 'gpt-4',
        temperature: 0.7
    },
    {
        threadId: thread.thread_id,
        streamMode: ["values", "messages", "debug"],
        streamSubgraphs: true,
        metadata: { source: "custom" },
        multitaskStrategy: "interrupt",
        config: {
            tags: ["custom-tag"],
            recursion_limit: 50,
            configurable: {
                custom_option: "value"
            }
        }
    }
);

for await (const chunk of stream) {
    console.log('Response:', chunk);
}

Multi-turn Conversations

Maintain context across multiple interactions:

// First message
const firstResponse = client.stream(
    {
        messages: [{ role: 'user', content: "What's the main technology stack?" }],
        repo_url: 'https://github.com/example/repo'
    },
    { threadId: thread.thread_id }
);
for await (const chunk of firstResponse) {
    console.log('First response:', chunk);
}

// Follow-up using the same thread
const followUpResponse = client.stream(
    {
        messages: [{ role: 'user', content: "What are the main features?" }],
        repo_url: 'https://github.com/example/repo'
    },
    { threadId: thread.thread_id }
);
for await (const chunk of followUpResponse) {
    console.log('Follow-up response:', chunk);
}

API Reference

Constructor
new LmsystemsClient({
    graphName: string,    // Name of your purchased graph
    apiKey: string        // Your LMSystems API key
})
Core Methods
  • setup(): Promise<void>

    • Initializes the client
    • Must be called before using other methods
    • Throws AuthenticationError, GraphError, or APIError on failure
  • stream(input: Record<string, any>, options?: StreamOptions): AsyncGenerator

    • Streams responses from the graph
    • input: Graph-specific input parameters
    • options:
      • threadId?: string - Thread ID for conversation continuity
      • streamMode?: ("values" | "messages" | "messages-tuple" | "debug" | "events")[]
      • streamSubgraphs?: boolean - Stream subgraph execution details
      • metadata?: Record<string, any> - Session metadata
      • config?: Record<string, any> - Graph configuration
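The `streamMode` setting determines the shape of each yielded chunk. The SDK's concrete chunk types aren't spelled out here, so the sketch below models them with hypothetical local types (`MessageChunk` and `ValuesChunk` are illustrative names, not SDK exports) to show one way to dispatch on chunk shape:

```typescript
// Hypothetical chunk shapes for illustration only; a real graph's stream
// yields graph-defined payloads that may differ.
type MessageChunk = { event: "messages"; content: string };
type ValuesChunk = { event: "values"; values: Record<string, unknown> };
type StreamChunk = MessageChunk | ValuesChunk;

// Collect assistant text from a stream while ignoring state snapshots.
async function collectText(stream: AsyncIterable<StreamChunk>): Promise<string> {
    let text = "";
    for await (const chunk of stream) {
        if (chunk.event === "messages") {
            text += chunk.content;
        }
    }
    return text;
}

// A stand-in stream, since running a real graph requires a purchased graph.
async function* fakeStream(): AsyncGenerator<StreamChunk> {
    yield { event: "messages", content: "Hello" };
    yield { event: "values", values: { step: 1 } };
    yield { event: "messages", content: ", world" };
}
```

Discriminating on a tag field like this keeps the `for await` loop type-safe when mixing several stream modes.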
Thread Management Methods
  • createThread(options?: ThreadOptions): Promise<FilteredThread>

    • Creates or retrieves a thread
    • options:
      • threadId?: string - ID of existing thread
      • metadata?: Record<string, any> - Thread metadata
  • getThread(threadId: string): Promise<FilteredThread>

    • Retrieves thread by ID
  • deleteThread(threadId: string): Promise<void>

    • Deletes a thread and its resources
  • copyThread(threadId: string): Promise<FilteredThread>

    • Creates a copy of an existing thread
  • getThreadState(threadId: string): Promise<FilteredThreadState>

    • Gets current thread state
  • getThreadHistory(threadId: string, options?: ThreadHistoryOptions): Promise<FilteredThreadState[]>

    • Retrieves thread history
    • options:
      • limit?: number - Max states to retrieve
      • metadata?: Record<string, any> - Filter by metadata
      • before?: string - Timestamp filter
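Because `limit` caps how many states one call returns, the `before` filter can be used to page through long histories. The helper below is a sketch against a structurally typed client; the `checkpoint_ts` field and the assumption that states arrive newest-first are hypothetical, since the shape of `FilteredThreadState` isn't documented here:

```typescript
// Hypothetical minimal shapes for illustration; the SDK's FilteredThreadState
// may expose different field names.
interface StateLike { checkpoint_ts: string }
interface HistoryClient {
    getThreadHistory(
        threadId: string,
        options?: { limit?: number; before?: string }
    ): Promise<StateLike[]>;
}

// Page backwards through a thread's history, `pageSize` states at a time.
async function fetchAllHistory(
    client: HistoryClient,
    threadId: string,
    pageSize = 10
): Promise<StateLike[]> {
    const all: StateLike[] = [];
    let before: string | undefined;
    for (;;) {
        const page = await client.getThreadHistory(threadId, { limit: pageSize, before });
        all.push(...page);
        if (page.length < pageSize) break; // short page means we reached the start
        before = page[page.length - 1].checkpoint_ts; // oldest state in this page
    }
    return all;
}
```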

Error Handling

The client provides specific error types for better error handling:

try {
    await client.setup();
    const thread = await client.createThread();
    const stream = client.stream(input, { threadId: thread.thread_id });

    for await (const chunk of stream) {
        console.log(chunk);
    }
} catch (error) {
    if (error instanceof AuthenticationError) {
        console.error('Invalid API key');
    } else if (error instanceof GraphError) {
        console.error('Graph not found or not purchased');
    } else if (error instanceof APIError) {
        console.error('API error:', error.message);
    } else {
        console.error('Unexpected error:', error);
    }
}

Using as a Subgraph (PurchasedGraph)

The PurchasedGraph class allows you to incorporate purchased marketplace graphs into your own LangGraph applications as nodes. This enables you to build complex applications that leverage pre-built, specialized graphs.

Basic Integration

Here's a simple example of incorporating a purchased graph into your LangGraph application:

import { PurchasedGraph } from 'lmsystems';
import { StateGraph, MessagesAnnotation, START } from "@langchain/langgraph";
import dotenv from 'dotenv';

dotenv.config();

async function main() {
    // Initialize with default state values
    const stateValues = {
        repo_url: 'https://github.com/example/repo',
        github_token: process.env.GITHUB_TOKEN,
        model: 'gpt-4'
    };

    // Create the purchased graph instance
    const purchasedGraph = new PurchasedGraph(
        "github-agent-48",           // Your purchased graph name
        process.env.LMSYSTEMS_API_KEY,
        undefined,                   // Optional config
        stateValues                  // Default state values
    );

    // Wait for initialization before using
    await purchasedGraph.waitForInitialization();

    // Create your parent graph
    const parentGraph = new StateGraph(MessagesAnnotation);

    // Add the purchased graph as a node
    parentGraph.addNode("purchased_node", purchasedGraph);
    parentGraph.addEdge(START, "purchased_node");

    // Compile the graph
    const graph = parentGraph.compile();

    // Use the graph
    const result = await graph.invoke({
        messages: [{ role: "user", content: "What's this repo about?" }]
    });
}

main();

Advanced Usage

The PurchasedGraph class provides both synchronous and streaming interfaces:

// Synchronous invocation
const result = await graph.invoke({
    messages: [{ role: "user", content: "Analyze this repo" }]
});

// Streaming with subgraph details
for await (const chunk of await graph.stream({
    messages: [{ role: "user", content: "What's this repo about?" }]
}, {
    subgraphs: true,
    configurable: {
        messagesKey: "messages"
    }
})) {
    console.log("Chunk:", chunk);
}

API Reference

Constructor
constructor(
    graphName: string,                    // Name of purchased graph
    apiKey: string,                       // LMSystems API key
    config?: RunnableConfig,              // Optional LangChain config
    defaultStateValues?: Record<string, any>, // Default values for graph state
    baseUrl?: string,                     // Optional API base URL
    developmentMode?: boolean             // Optional dev mode flag
)
Methods
  • waitForInitialization(): Promise<void>

    • Ensures the graph is fully initialized
    • Must be called before using the graph
    • Returns a promise that resolves when initialization is complete
  • invoke(input: Record<string, any>, options?: RunnableConfig): Promise<Record<string, BaseMessage[]>>

    • Executes the graph synchronously
    • Returns complete results as messages
    • input: Graph input parameters
    • options: Optional LangChain configuration
  • stream(input: Record<string, any>, options?: RunnableConfig): Promise<IterableReadableStream<BaseMessage | StateUpdate>>

    • Executes the graph with streaming output
    • Returns an async iterable of messages and state updates
    • input: Graph input parameters
    • options: Optional streaming configuration
State Updates

The stream method can provide state updates during execution:

interface StateUpdate {
    nodeIds: string[];      // Active nodes
    state: Record<string, any>; // Current graph state
}
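Because message chunks and `StateUpdate` objects arrive interleaved when streaming with `subgraphs: true`, a runtime type guard lets TypeScript narrow each chunk safely. A minimal guard based on the interface above (the guard itself is a hypothetical helper, not an SDK export):

```typescript
interface StateUpdate {
    nodeIds: string[];          // Active nodes
    state: Record<string, any>; // Current graph state
}

// Narrow an unknown stream chunk to StateUpdate by probing for `nodeIds`.
function isStateUpdate(chunk: unknown): chunk is StateUpdate {
    return (
        typeof chunk === "object" &&
        chunk !== null &&
        Array.isArray((chunk as StateUpdate).nodeIds)
    );
}
```

With a guard like this, the `'nodeIds' in chunk` check from the streaming examples becomes a typed branch in which the compiler knows `chunk.state` exists.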

Best Practices

  1. Always wait for initialization:
const purchasedGraph = new PurchasedGraph(/* ... */);
await purchasedGraph.waitForInitialization();
  2. Provide default state values for consistent behavior:
const purchasedGraph = new PurchasedGraph(
    "graph-name",
    apiKey,
    undefined,
    {
        model: 'gpt-4',
        temperature: 0.7,
        // Other defaults...
    }
);
  3. Use error handling:
try {
    const purchasedGraph = new PurchasedGraph(/* ... */);
    await purchasedGraph.waitForInitialization();

    const result = await graph.invoke(input);
} catch (error) {
    if (error instanceof AuthenticationError) {
        // Handle authentication errors
    } else if (error instanceof GraphError) {
        // Handle graph access errors (e.g. graph not purchased)
    }
    // Handle other errors
}
  4. Consider streaming for real-time updates:
for await (const chunk of await graph.stream(input, {
    subgraphs: true  // Get detailed execution info
})) {
    if ('nodeIds' in chunk) {
        // Handle state update
        console.log('Active nodes:', chunk.nodeIds);
    } else {
        // Handle message
        console.log('Message:', chunk.content);
    }
}

Environment Variables

The SDK requires the following environment variable:

  • LMSYSTEMS_API_KEY: Your LMSystems API key (found at lmsystems.ai/account)
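Every example reads the key from `process.env.LMSYSTEMS_API_KEY`. A small fail-fast check at startup (a hypothetical helper, not part of the SDK) surfaces a missing key immediately instead of as a later authentication failure:

```typescript
// Read a required environment variable, throwing a descriptive error
// at startup rather than failing deep inside an API call.
function requireEnv(name: string): string {
    const value = process.env[name];
    if (!value) {
        throw new Error(`Missing required environment variable: ${name}`);
    }
    return value;
}

// Usage: const apiKey = requireEnv('LMSYSTEMS_API_KEY');
```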

Error Handling

The SDK provides several error types:

  • AuthenticationError: Issues with API key or authentication
  • GraphError: Problems with graph execution or access
  • APIError: General API communication issues
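The examples catch these with `instanceof`, which relies on each class being a real `Error` subclass exported by the package. If you need stand-ins for unit tests, a structurally similar set of classes might look like the sketch below (illustrative only; import the SDK's own classes in real code):

```typescript
// Illustrative stand-ins mirroring the documented error names.
class AuthenticationError extends Error {
    constructor(message: string) { super(message); this.name = "AuthenticationError"; }
}
class GraphError extends Error {
    constructor(message: string) { super(message); this.name = "GraphError"; }
}
class APIError extends Error {
    constructor(message: string) { super(message); this.name = "APIError"; }
}

// Map an unknown thrown value to a short description, mirroring the
// catch blocks used throughout this README.
function describeError(error: unknown): string {
    if (error instanceof AuthenticationError) return "auth failed";
    if (error instanceof GraphError) return "graph unavailable";
    if (error instanceof APIError) return "api error";
    return "unknown error";
}
```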


License

This project is licensed under the MIT License - see the LICENSE file for details.

Examples

Basic Usage

The simplest way to use the SDK for a chat application:

import { LmsystemsClient } from 'lmsystems';

async function main() {
    // Initialize and setup client
    const client = new LmsystemsClient({
        graphName: 'github-agentz-6',
        apiKey: process.env.LMSYSTEMS_API_KEY
    });
    await client.setup();

    try {
        // Create a thread for the conversation
        const thread = await client.createThread();

        // Stream a response
        const stream = client.stream(
            {
                messages: [{ role: 'user', content: "What's this repo about?" }],
                repo_url: 'https://github.com/example/repo'
            },
            { threadId: thread.thread_id }
        );

        // Process stream response
        for await (const chunk of stream) {
            console.log(chunk);
        }

        // Clean up when done
        await client.deleteThread(thread.thread_id);
    } catch (error) {
        console.error('Error:', error);
    }
}

main();


Multi-turn Conversations

// First interaction
const firstResponse = client.stream(
    {
        messages: [{ role: 'user', content: "What's the main technology stack?" }],
        repo_url: 'https://github.com/example/repo'
    },
    { threadId: thread.thread_id }
);
for await (const chunk of firstResponse) {
    console.log('First Response:', chunk);
}

// Follow-up question using the same thread
const followUpResponse = client.stream(
    {
        messages: [{ role: 'user', content: "What are the main features?" }],
        repo_url: 'https://github.com/example/repo'
    },
    { threadId: thread.thread_id }
);
for await (const chunk of followUpResponse) {
    console.log('Follow-up Response:', chunk);
}

Error Handling

try {
    await client.setup();
    const thread = await client.createThread();

    const stream = client.stream(
        { messages: [{ role: 'user', content: 'Hello' }] },
        { threadId: thread.thread_id }
    );

    for await (const chunk of stream) {
        console.log(chunk);
    }
} catch (error) {
    if (error instanceof AuthenticationError) {
        console.error('Invalid API key or authentication failed');
    } else if (error instanceof GraphError) {
        console.error('Graph not found or not purchased');
    } else if (error instanceof APIError) {
        console.error('API communication error:', error.message);
    } else {
        console.error('Unexpected error:', error);
    }
}

TypeScript Support

The SDK provides full TypeScript support for better type safety and IDE assistance:

import {
    LmsystemsClient,
    StreamOptions,
    ThreadOptions,
    FilteredThread,
    FilteredThreadState
} from 'lmsystems';

// Type-safe thread options
const threadOptions: ThreadOptions = {
    metadata: {
        conversation_id: 'chat-123'
    }
};

// Type-safe stream options
const streamOptions: StreamOptions = {
    streamMode: ["values", "messages"],
    metadata: { source: "chat" }
};

// Type-safe response handling
const thread: FilteredThread = await client.createThread(threadOptions);
const state: FilteredThreadState = await client.getThreadState(thread.thread_id);