
@memberjunction/ai-gemini

A comprehensive wrapper for Google's Gemini AI models that seamlessly integrates with the MemberJunction AI framework, providing access to Google's powerful generative AI capabilities.

Features

  • Google Gemini Integration: Connect to Google's state-of-the-art Gemini models using the official @google/genai SDK
  • Standardized Interface: Implements MemberJunction's BaseLLM abstract class
  • Streaming Support: Full support for streaming responses with real-time token generation
  • Multimodal Support: Handle text, images, audio, video, and file content
  • Message Formatting: Automatic conversion between MemberJunction and Gemini message formats
  • Effort Level Support: Leverage Gemini's reasoning mode for higher-quality responses
  • Error Handling: Robust error handling with detailed reporting
  • Chat Support: Full support for chat-based interactions with conversation history
  • Temperature Control: Fine-tune generation creativity
  • Response Format Control: Request specific response MIME types

Installation

npm install @memberjunction/ai-gemini

Requirements

  • Node.js 16+
  • A Google AI Studio API key
  • MemberJunction Core libraries

Usage

Basic Setup

import { GeminiLLM } from '@memberjunction/ai-gemini';

// Initialize with your Google AI API key
const geminiLLM = new GeminiLLM('your-google-ai-api-key');

Chat Completion

import { ChatParams } from '@memberjunction/ai';

// Create chat parameters
const chatParams: ChatParams = {
  model: 'gemini-pro',  // or 'gemini-pro-vision' for multimodal
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'What are the key features of the Gemini AI model?' }
  ],
  temperature: 0.7,
  maxOutputTokens: 1000
};

// Get a response
try {
  const response = await geminiLLM.ChatCompletion(chatParams);
  if (response.success) {
    console.log('Response:', response.data.choices[0].message.content);
    console.log('Time elapsed:', response.timeElapsed, 'ms');
  } else {
    console.error('Error:', response.errorMessage);
  }
} catch (error) {
  console.error('Exception:', error);
}

Streaming Chat Completion

import { StreamingChatCallbacks } from '@memberjunction/ai';

// Define streaming callbacks
const streamCallbacks: StreamingChatCallbacks = {
  onToken: (token: string) => {
    process.stdout.write(token); // Print each token as it arrives
  },
  onComplete: (fullResponse: string) => {
    console.log('\n\nComplete response received');
  },
  onError: (error: Error) => {
    console.error('Streaming error:', error);
  }
};

// Use streaming
const streamParams: ChatParams = {
  model: 'gemini-pro',
  messages: [
    { role: 'user', content: 'Write a short story about a robot.' }
  ],
  streaming: true,
  streamingCallbacks: streamCallbacks
};

await geminiLLM.ChatCompletion(streamParams);

Multimodal Content

import { ChatMessageContent } from '@memberjunction/ai';

// Create multimodal content
const multimodalContent: ChatMessageContent = [
  { type: 'text', content: 'What do you see in this image?' },
  { type: 'image_url', content: 'base64_encoded_image_data_here' }
];

const multimodalParams: ChatParams = {
  model: 'gemini-pro-vision',
  messages: [
    { role: 'user', content: multimodalContent }
  ]
};

const response = await geminiLLM.ChatCompletion(multimodalParams);

Enhanced Reasoning with Effort Level

// Use effort level to enable Gemini's full reasoning mode
const reasoningParams: ChatParams = {
  model: 'gemini-pro',
  messages: [
    { role: 'user', content: 'Solve this complex logic puzzle...' }
  ],
  effortLevel: 'high' // Enables full reasoning mode
};

const response = await geminiLLM.ChatCompletion(reasoningParams);

Direct Access to Gemini Client

// Access the underlying GoogleGenAI client for advanced usage
const geminiClient = geminiLLM.GeminiClient;

// Use the client directly if needed for custom operations
const chat = geminiClient.chats.create({
  model: 'gemini-pro',
  history: []
});

Supported Models

Google Gemini provides several models with different capabilities:

  • gemini-pro: General-purpose text model
  • gemini-pro-vision: Multimodal model that can process images and text
  • gemini-ultra: Google's most advanced model (when available)

Check the Google AI documentation for the latest list of supported models.

API Reference

GeminiLLM Class

A class that extends BaseLLM to provide Google Gemini-specific functionality.

Constructor

new GeminiLLM(apiKey: string)

Creates a new instance of the Gemini LLM wrapper.

Parameters:

  • apiKey: Your Google AI Studio API key

Properties

  • GeminiClient: (read-only) Returns the underlying GoogleGenAI client instance
  • SupportsStreaming: (read-only) Returns true - Gemini supports streaming responses
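Callers can use the SupportsStreaming flag to decide whether to opt into streaming at runtime. A minimal sketch of that pattern; the StreamingCapable interface and shouldStream helper below are hypothetical illustrations, not part of this package's API:

```typescript
// Hypothetical stand-in for the streaming-related surface of BaseLLM
// subclasses such as GeminiLLM; not the package's actual interface.
interface StreamingCapable {
  SupportsStreaming: boolean;
}

// Only request streaming when the provider actually supports it;
// otherwise fall back to a regular (non-streaming) completion.
function shouldStream(provider: StreamingCapable, wantsStreaming: boolean): boolean {
  return wantsStreaming && provider.SupportsStreaming;
}
```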

Methods

ChatCompletion(params: ChatParams): Promise&lt;ChatResult&gt;

Perform a chat completion with Gemini models.

Parameters:

  • params: Chat parameters including model, messages, temperature, etc.

Returns:

  • Promise resolving to a ChatResult with the model's response

SummarizeText(params: SummarizeParams): Promise

Not implemented yet - will throw an error if called.

ClassifyText(params: ClassifyParams): Promise

Not implemented yet - will throw an error if called.

Static Methods

MapMJMessageToGeminiHistoryEntry(message: ChatMessage): Content

Converts a MemberJunction ChatMessage to Gemini's Content format.

Parameters:

  • message: MemberJunction ChatMessage object

Returns:

  • Gemini Content object with proper role mapping

MapMJContentToGeminiParts(content: ChatMessageContent): Array

Converts MemberJunction message content to Gemini Parts array.

Parameters:

  • content: String or array of content parts

Returns:

  • Array of Gemini Part objects

Response Format Control

Control the format of Gemini responses using the responseFormat parameter:

const params: ChatParams = {
  // ...other parameters
  responseFormat: 'text/plain',  // Regular text response
};

// For structured data
const jsonParams: ChatParams = {
  // ...other parameters
  responseFormat: 'application/json',  // Request JSON response
};

Error Handling

The wrapper provides detailed error information:

try {
  const response = await geminiLLM.ChatCompletion(params);
  if (!response.success) {
    console.error('Error:', response.errorMessage);
    console.error('Status:', response.statusText);
    console.error('Exception:', response.exception);
  }
} catch (error) {
  console.error('Exception occurred:', error);
}

Message Handling

The wrapper handles proper message formatting and role conversion between MemberJunction's format and Google Gemini's expected format:

  • MemberJunction's system and user roles are converted to Gemini's user role
  • MemberJunction's assistant role is converted to Gemini's model role
  • Messages are automatically spaced to ensure alternating roles as required by Gemini
  • Multimodal content is properly converted with appropriate MIME types
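The role conversion described above can be sketched as a small helper. This mirrors the documented mapping only; the actual logic lives in MapMJMessageToGeminiHistoryEntry, and the type and function names below are illustrative:

```typescript
// Illustrative mirror of the documented role mapping, not the
// package's actual implementation.
type MJRole = 'system' | 'user' | 'assistant';
type GeminiRole = 'user' | 'model';

function toGeminiRole(role: MJRole): GeminiRole {
  // system and user both become Gemini's 'user' role;
  // assistant becomes Gemini's 'model' role
  return role === 'assistant' ? 'model' : 'user';
}
```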

Content Type Support

The wrapper supports various content types with automatic MIME type mapping:

  • Text: Standard text messages
  • Images: image_url type → image/jpeg MIME type
  • Audio: audio_url type → audio/mpeg MIME type
  • Video: video_url type → video/mp4 MIME type
  • Files: file_url type → application/octet-stream MIME type
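The table above amounts to a simple lookup from content type to default MIME type. A sketch of that mapping; the table and helper names here are hypothetical illustrations of the documented behavior, not the package's source:

```typescript
// Hypothetical lookup table mirroring the documented MIME type mapping.
const defaultMimeTypes: Record<string, string> = {
  image_url: 'image/jpeg',
  audio_url: 'audio/mpeg',
  video_url: 'video/mp4',
  file_url: 'application/octet-stream',
};

// Returns the default MIME type for a content part type,
// or undefined for plain text (which needs no MIME mapping).
function mimeTypeFor(contentType: string): string | undefined {
  return defaultMimeTypes[contentType];
}
```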

Integration with MemberJunction

This package is designed to work seamlessly with the MemberJunction AI framework:

import { AIEngine } from '@memberjunction/ai';
import { GeminiLLM } from '@memberjunction/ai-gemini';

// Register the Gemini provider with the AI engine
const aiEngine = new AIEngine();
const geminiProvider = new GeminiLLM('your-api-key');

// Use through the AI engine's unified interface
const result = await aiEngine.ChatCompletion({
  provider: 'GeminiLLM',
  model: 'gemini-pro',
  messages: [/* ... */]
});

Performance Considerations

  • Streaming: Use streaming for long responses to improve perceived performance
  • Effort Level: Use the effortLevel parameter judiciously as it increases latency and cost
  • Model Selection: Choose the appropriate model based on your needs (text-only vs multimodal)
  • Message Spacing: The wrapper automatically handles message spacing, adding minimal overhead

Limitations

Current implementation status:

  • ✅ Chat completion functionality (streaming and non-streaming)
  • ✅ Multimodal content support
  • ✅ Effort level configuration for enhanced reasoning
  • ❌ SummarizeText functionality (not implemented)
  • ❌ ClassifyText functionality (not implemented)
  • ❌ Detailed token usage reporting (Gemini doesn't provide this)

Dependencies

  • @google/genai (v0.14.0): Official Google GenAI SDK
  • @memberjunction/ai (v2.43.0): MemberJunction AI core framework
  • @memberjunction/global (v2.43.0): MemberJunction global utilities

Development

Building

npm run build

Testing

Tests are not currently implemented. Once a test suite is added, run it with:

npm test

License

ISC

Contributing

For bug reports, feature requests, or contributions, please visit the MemberJunction repository.
