@memberjunction/ai-anthropic v2.48.0

A comprehensive wrapper for Anthropic's AI models (Claude) that provides a standardized interface within the MemberJunction AI framework. This package enables seamless integration with Claude models while maintaining consistency with MemberJunction's AI abstraction layer.

Features

  • Seamless Integration: Direct integration with Anthropic's Claude models
  • Standardized Interface: Implements MemberJunction's BaseLLM abstract class
  • Streaming Support: Full support for streaming responses
  • Advanced Caching: Ephemeral caching support for improved performance
  • Thinking/Reasoning: Support for Claude's thinking/reasoning capabilities with configurable token budgets
  • Message Formatting: Automatic handling of message format conversions and role mappings
  • Error Handling: Comprehensive error handling with detailed error reporting
  • Token Usage Tracking: Detailed token usage tracking including cached tokens
  • Multiple Models: Support for all Claude model variants (Opus, Sonnet, Haiku, etc.)

Installation

npm install @memberjunction/ai-anthropic

Requirements

  • Node.js 16+
  • An Anthropic API key
  • MemberJunction Core libraries (@memberjunction/ai, @memberjunction/global)

Usage

Basic Setup

import { AnthropicLLM } from '@memberjunction/ai-anthropic';

// Initialize with your API key
const anthropicLLM = new AnthropicLLM('your-anthropic-api-key');

Chat Completion

import { ChatParams } from '@memberjunction/ai';

// Create chat parameters
const chatParams: ChatParams = {
  model: 'claude-3-opus-20240229',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Hello, can you help me understand how AI works?' }
  ],
  maxOutputTokens: 1000,
  temperature: 0.7,
  enableCaching: true // Enable ephemeral caching
};

// Get a response
try {
  const response = await anthropicLLM.ChatCompletion(chatParams);
  if (response.success) {
    console.log('AI Response:', response.data.choices[0].message.content);
    console.log('Token Usage:', response.data.usage);
    
    // Check cache info if available
    if (response.cacheInfo) {
      console.log('Cache Hit:', response.cacheInfo.cacheHit);
      console.log('Cached Tokens:', response.cacheInfo.cachedTokenCount);
    }
  } else {
    console.error('Error:', response.errorMessage);
  }
} catch (error) {
  console.error('Exception:', error);
}

Streaming Chat Completion

const streamParams: ChatParams = {
  model: 'claude-3-sonnet-20240229',
  messages: [
    { role: 'user', content: 'Write a short story about AI' }
  ],
  maxOutputTokens: 2000,
  streaming: true,
  streamCallback: (content: string) => {
    // Handle streaming chunks as they arrive
    process.stdout.write(content);
  }
};

const response = await anthropicLLM.ChatCompletion(streamParams);

Using Thinking/Reasoning Models

const reasoningParams: ChatParams = {
  model: 'claude-3-opus-20240229',
  messages: [
    { role: 'user', content: 'Solve this complex math problem: ...' }
  ],
  effortLevel: 'high',
  reasoningBudgetTokens: 5000, // Allow up to 5000 tokens for reasoning
  maxOutputTokens: 2000
};

const response = await anthropicLLM.ChatCompletion(reasoningParams);

Text Summarization

import { SummarizeParams } from '@memberjunction/ai';

const text = `Long text that you want to summarize...`;

const summarizeParams: SummarizeParams = {
  text: text,
  model: 'claude-2.1',
  temperature: 0.3,
  maxWords: 100
};

const summary = await anthropicLLM.SummarizeText(summarizeParams);
console.log('Summary:', summary.summary);

Direct Access to Anthropic Client

// Access the underlying Anthropic client for advanced usage
const anthropicClient = anthropicLLM.AnthropicClient;

// Use the client directly if needed for features not exposed by the wrapper
const customResponse = await anthropicClient.messages.create({
  model: 'claude-3-haiku-20240307',
  system: 'You are a helpful assistant.',
  messages: [{ role: 'user', content: 'Hello!' }],
  max_tokens: 500
});

API Reference

AnthropicLLM Class

Extends BaseLLM from @memberjunction/ai to provide Anthropic-specific functionality.

Constructor

new AnthropicLLM(apiKey: string)

Creates a new instance of the AnthropicLLM wrapper.

Parameters:

  • apiKey: Your Anthropic API key

Properties

  • AnthropicClient: (read-only) Returns the underlying Anthropic SDK client instance
  • SupportsStreaming: (read-only) Returns true - Anthropic supports streaming

Methods

ChatCompletion(params: ChatParams): Promise<ChatResult>

Performs a chat completion using Claude models.

Parameters:

  • params: ChatParams object containing:
    • model: The model to use (e.g., 'claude-3-opus-20240229')
    • messages: Array of chat messages
    • maxOutputTokens: Maximum tokens to generate (default: 64000)
    • temperature: Temperature for randomness (0-1)
    • enableCaching: Enable ephemeral caching (default: true)
    • streaming: Enable streaming responses
    • streamCallback: Callback for streaming chunks
    • effortLevel: Enable thinking/reasoning mode
    • reasoningBudgetTokens: Token budget for reasoning (min: 1)

Returns: ChatResult with response data, usage info, and timing metrics

SummarizeText(params: SummarizeParams): Promise<SummarizeResult>

Summarizes text using Claude's completion API.

Parameters:

  • params: SummarizeParams object containing:
    • text: Text to summarize
    • model: Model to use (default: 'claude-2.1')
    • temperature: Temperature setting
    • maxWords: Maximum words in summary

Returns: SummarizeResult with the generated summary

ConvertMJToAnthropicRole(role: ChatMessageRole): 'assistant' | 'user'

Converts MemberJunction chat roles to Anthropic-compatible roles.

Parameters:

  • role: MemberJunction role ('system', 'user', 'assistant')

Returns: Anthropic role ('assistant' or 'user')
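The mapping can be sketched as follows. This is an illustrative re-implementation, not the package's source; the assumption is that 'system' falls back to 'user', since Anthropic's Messages API accepts only 'user' and 'assistant' roles and real system prompts travel in the top-level `system` field instead:

```typescript
type ChatMessageRole = 'system' | 'user' | 'assistant';

// Illustrative sketch of the role mapping described above (assumption:
// 'system' maps to 'user' because Anthropic's Messages API only accepts
// 'user' and 'assistant' in the messages array).
function convertMJToAnthropicRole(role: ChatMessageRole): 'assistant' | 'user' {
  return role === 'assistant' ? 'assistant' : 'user';
}
```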

Advanced Features

Caching

The wrapper supports Anthropic's ephemeral caching feature, which can significantly improve performance for repeated queries:

const params: ChatParams = {
  model: 'claude-3-opus-20240229',
  messages: [...],
  enableCaching: true // Caching is enabled by default
};

Caching is automatically applied to the last message in the conversation for optimal performance.
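Conceptually, this means attaching an ephemeral `cache_control` marker to the final content block of the final message, following Anthropic's prompt-caching request format. The sketch below illustrates that shape with hypothetical helper and type names; it is not the wrapper's internal code:

```typescript
// Illustrative shapes based on Anthropic's prompt-caching request format;
// the helper name `applyEphemeralCache` is hypothetical.
type ContentBlock = {
  type: 'text';
  text: string;
  cache_control?: { type: 'ephemeral' };
};
type Message = { role: 'user' | 'assistant'; content: ContentBlock[] };

// Mark only the last content block of the last message as cacheable.
function applyEphemeralCache(messages: Message[]): Message[] {
  return messages.map((m, i) =>
    i === messages.length - 1
      ? {
          ...m,
          content: m.content.map((b, j) =>
            j === m.content.length - 1
              ? { ...b, cache_control: { type: 'ephemeral' } }
              : b
          ),
        }
      : m
  );
}
```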

Message Format Handling

The wrapper automatically handles:

  • Conversion of system messages to Anthropic's format
  • Prevention of consecutive messages with the same role
  • Proper formatting of content blocks
  • Automatic insertion of filler messages when needed
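The normalization steps above can be sketched like this (an illustrative standalone version, not the package's source; the filler text and function name are assumptions):

```typescript
interface MJChatMessage { role: 'system' | 'user' | 'assistant'; content: string; }
interface AnthropicMessage { role: 'user' | 'assistant'; content: string; }

// Sketch of the normalization steps listed above: hoist system messages
// into a separate system string, then insert a neutral filler turn whenever
// two consecutive messages would share a role (Anthropic requires strictly
// alternating user/assistant turns).
function normalizeMessages(
  messages: MJChatMessage[]
): { system: string; messages: AnthropicMessage[] } {
  const system = messages
    .filter(m => m.role === 'system')
    .map(m => m.content)
    .join('\n');
  const rest = messages.filter(m => m.role !== 'system') as AnthropicMessage[];
  const out: AnthropicMessage[] = [];
  for (const msg of rest) {
    const prev = out[out.length - 1];
    if (prev && prev.role === msg.role) {
      // Filler turn with the opposite role to restore alternation.
      out.push({ role: msg.role === 'user' ? 'assistant' : 'user', content: 'OK' });
    }
    out.push(msg);
  }
  return { system, messages: out };
}
```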

Error Handling

The wrapper provides comprehensive error information:

try {
  const response = await anthropicLLM.ChatCompletion(params);
  if (!response.success) {
    console.error('Error:', response.errorMessage);
    console.error('Status:', response.statusText);
    console.error('Time Elapsed:', response.timeElapsed, 'ms');
    console.error('Exception Details:', response.exception);
  }
} catch (error) {
  console.error('Exception occurred:', error);
}

Integration with MemberJunction

This package is designed to work seamlessly with the MemberJunction AI framework:

  1. Consistent Interface: Implements the same methods as other AI providers
  2. Type Safety: Full TypeScript support with proper typing
  3. Global Registration: Automatically registers with MemberJunction's class factory using @RegisterClass decorator
  4. Standardized Results: Returns standardized result objects compatible with MemberJunction's AI abstraction
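The class-factory pattern behind point 3 can be sketched as below. All names here are illustrative stand-ins (the real mechanism is the @RegisterClass decorator from @memberjunction/global); the point is that providers register a constructor under a key and callers instantiate by key without importing the concrete class:

```typescript
// Illustrative class-factory sketch; not MemberJunction's implementation.
class BaseLLMSketch {
  constructor(public apiKey: string) {}
}

// Registry mapping a provider key to its constructor.
const classFactory = new Map<string, new (apiKey: string) => BaseLLMSketch>();

class AnthropicLLMSketch extends BaseLLMSketch {}
// In MemberJunction this registration happens via the @RegisterClass decorator.
classFactory.set('AnthropicLLM', AnthropicLLMSketch);

// Instantiate a provider by key, without a direct import of the class.
function createLLM(key: string, apiKey: string): BaseLLMSketch {
  const Ctor = classFactory.get(key);
  if (!Ctor) throw new Error(`No provider registered for ${key}`);
  return new Ctor(apiKey);
}
```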

Dependencies

  • @anthropic-ai/sdk (^0.50.4): Official Anthropic SDK
  • @memberjunction/ai (^2.43.0): MemberJunction AI core framework
  • @memberjunction/global (^2.43.0): MemberJunction global utilities

Development

Building

npm run build

Running in Development

npm start

License

ISC

Support

For issues and questions:
