
NeoAPI SDK

Integrate neoapi.ai LLM Analytics with your LLM pipelines.

Features

  • Asynchronous and Synchronous Clients: Choose between async (NeoApiClientAsync) and sync (NeoApiClientSync) clients
  • Batching: Automatically batches LLM outputs to optimize API calls
  • Retry Logic: Robust retry mechanisms for failed API requests
  • Dynamic Adjustment: Automatically adjusts batch sizes and flush intervals based on load
  • Prompt Tracking: Track both input prompts and output responses
  • Flexible Integration: Multiple ways to track LLM interactions

Quick Start

# Install the package
npm install neoapi-sdk

# Set your API key
export NEOAPI_API_KEY='your_api_key'

import { NeoApiClientAsync } from 'neoapi-sdk';

// Initialize and start the client
const client = new NeoApiClientAsync({});
client.start();

// Track LLM outputs with prompts
await client.track({
  text: 'LLM response',
  prompt: 'What is the capital of France?',
  timestamp: Date.now(),
  project: 'my_project'
});

// Stop when done
client.stop();
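
The synchronous client follows the same flow. This is a minimal sketch assuming NeoApiClientSync mirrors the async API (see the API Reference below):

import { NeoApiClientSync } from 'neoapi-sdk';

const client = new NeoApiClientSync({});
client.start();

// Same payload shape as the async client, but no await needed
client.track({
  text: 'LLM response',
  prompt: 'What is the capital of France?',
  timestamp: Date.now(),
  project: 'my_project'
});

client.stop();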

Usage

Configuration Options

  • Environment Variables:

    • NEOAPI_API_KEY: Your NeoAPI API key
    • NEOAPI_API_URL: (Optional) API endpoint URL. Defaults to https://api.neoapi.ai
  • Client Options:

    {
      apiKey?: string;              // API key (overrides env var)
      apiUrl?: string;              // API URL (overrides env var)
      batchSize?: number;           // Number of items per batch
      flushInterval?: number;       // Interval between flushes
      maxRetries?: number;          // Max retry attempts
      checkFrequency?: number;      // Frequency of health checks
    }
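
For example, a client can be configured explicitly rather than through environment variables. This is a sketch; the option values below are illustrative, not documented defaults:

import { NeoApiClientAsync } from 'neoapi-sdk';

const client = new NeoApiClientAsync({
  apiKey: 'your_api_key',          // overrides NEOAPI_API_KEY
  apiUrl: 'https://api.neoapi.ai', // the default endpoint
  batchSize: 50,                   // illustrative value
  flushInterval: 5000,             // illustrative value
  maxRetries: 3                    // illustrative value
});
client.start();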

Integration Examples

With OpenAI

import { Configuration, OpenAIApi } from 'openai';
import { NeoApiClientAsync } from 'neoapi-sdk';

const client = new NeoApiClientAsync({});
client.start();

class ChatService {
  private openai: OpenAIApi;

  constructor() {
    this.openai = new OpenAIApi(
      new Configuration({ apiKey: process.env.OPENAI_API_KEY })
    );
  }

  async chat(prompt: string) {
    const response = await this.openai.createChatCompletion({
      model: 'gpt-4o-mini',
      messages: [{ role: 'user', content: prompt }]
    });
    
    // message content may be undefined, but track() requires a string
    const result = response.data.choices[0].message?.content ?? '';
    
    // Track both prompt and response
    await client.track({
      text: result,
      prompt: prompt,
      timestamp: Date.now(),
      project: 'chatbot',
      group: 'customer_service'
    });

    return result;
  }
}

Batch Processing

const prompts = [
  'What is the capital of France?',
  'Explain quantum computing.',
];

const results = await client.batchProcess(prompts, {
  needAnalysisResponse: true,
  project: 'science_project',
  group: 'research_group',
  includePrompts: true  // Include original prompts in tracking
});

Tracking Options

The track method accepts an LLMOutput object with these fields:

{
  text: string;              // Required: The LLM output text
  prompt?: string;           // Optional: The input prompt
  timestamp: number;         // Required: Timestamp in milliseconds
  project?: string;          // Optional: Project identifier
  group?: string;            // Optional: Group identifier
  needAnalysisResponse?: boolean;  // Optional: Get analysis response
  formatJsonOutput?: boolean;      // Optional: Format JSON output
  metadata?: Record<string, any>;  // Optional: Additional metadata
  saveText?: boolean;             // Optional: Save text content
}
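
Putting several of these fields together, a fuller call might look like the sketch below; the project, group, and metadata values are illustrative:

await client.track({
  text: 'Paris is the capital of France.',
  prompt: 'What is the capital of France?',
  timestamp: Date.now(),
  project: 'geography_bot',           // illustrative identifier
  group: 'qa',                        // illustrative identifier
  needAnalysisResponse: true,
  saveText: true,
  metadata: { model: 'gpt-4o-mini' }  // illustrative metadata
});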

Best Practices

Error Handling

import { NeoApiError } from 'neoapi-sdk';  // assumes the SDK exports its error class

try {
  await client.track({
    text: result,
    prompt: userPrompt,
    timestamp: Date.now()
  });
} catch (error) {
  if (error instanceof NeoApiError) {
    logger.error('API Error:', error.message);
  } else {
    logger.error('Unexpected error:', error);
  }
}

Client Lifecycle

// Initialize at startup
const client = new NeoApiClientAsync({});
client.start();

// Handle shutdown
process.on('SIGTERM', async () => {
  await client.flush();
  client.stop();
  process.exit(0);
});

API Reference

NeoApiClientAsync & NeoApiClientSync

Both clients expose the same set of methods:

  • start(): Starts the client
  • stop(): Stops the client
  • track(llmOutput: LLMOutput): Tracks an LLM output with optional prompt
  • flush(): Flushes the current queue
  • batchProcess(prompts: string[], options?: BatchOptions): Processes multiple prompts

The main difference is that the async client's methods return Promises.
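
For example, the same call in both styles (a sketch; the payload is identical for both clients):

// Async client: returns a Promise, so await it
await asyncClient.track({ text: 'output', timestamp: Date.now() });

// Sync client: the same call, with no Promise to await
syncClient.track({ text: 'output', timestamp: Date.now() });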

Batch Processing Options

{
  needAnalysisResponse?: boolean;
  formatJsonOutput?: boolean;
  project?: string;
  group?: string;
  analysisSlug?: string;
  metadata?: Record<string, any>;
  saveText?: boolean;
  includePrompts?: boolean;  // Include original prompts in tracking
}