@memberjunction/ai

The MemberJunction AI Core package provides a comprehensive abstraction layer for working with various AI models (LLMs, video and audio generation, text-to-speech (TTS), embedding models, etc.) in a provider-agnostic way, allowing your application to switch between AI providers without refactoring your code.

Overview

This package serves as the foundation for all AI capabilities in the MemberJunction ecosystem. It defines abstract base classes and interfaces that are implemented by provider-specific packages, enabling seamless integration with various AI services while maintaining a consistent API.

Standalone Usage

IMPORTANT: This package can be used completely independently of the rest of the MemberJunction framework:

  • Zero database setup required
  • No environment variables or other settings required; you are responsible for configuration and pass API keys directly to the constructors
  • Works in any TypeScript environment that can safely make API calls (i.e., server-side; don't use it in browsers, where API keys would be exposed)
  • Perfect for server-side applications, backend services, and CLI tools

The @memberjunction/ai package and all provider packages in @memberjunction/ai-* are designed to be lightweight, standalone modules that can be used in any TypeScript project.

Features

  • Provider Abstraction: Work with AI models without tightly coupling to specific vendor APIs
  • Runtime Optionality: Switch between AI providers at runtime based on configuration
  • Base Classes: Abstract base classes for different AI model types (LLMs, embedding models, audio, video, etc.)
  • Standard Interfaces: Consistent interfaces for common AI operations like chat, summarization, and classification
  • Streaming Support: Stream responses from supported LLM providers for real-time UIs
  • Parallel Processing: Execute multiple chat completions in parallel with progress callbacks
  • Multi-modal Support: Handle text, images, videos, audio, and files in chat messages
  • Type Definitions: Comprehensive TypeScript type definitions for all AI operations
  • Error Handling: Standardized error handling and reporting across all providers
  • Token Usage Tracking: Consistent tracking of token usage across providers
  • Response Format Control: Specify output formats (Text, Markdown, JSON, or provider-specific)
  • Additional Settings: Provider-specific configuration through a flexible settings system

Installation

npm install @memberjunction/ai

Then install one or more provider packages:

npm install @memberjunction/ai-openai
npm install @memberjunction/ai-anthropic
npm install @memberjunction/ai-mistral
# etc.

Usage

Provider-Agnostic Usage (Recommended)

For maximum flexibility, use the class factory approach to select the provider at runtime:

import { BaseLLM, ChatParams } from '@memberjunction/ai';
import { MJGlobal } from '@memberjunction/global';

// Prevent tree-shaking of MistralLLM: the class factory resolves the class by name, so there
// is no static code path to it and some bundlers would otherwise strip it from the bundle.
import { LoadMistralLLM } from '@memberjunction/ai-mistral';
LoadMistralLLM(); 

// Get an implementation of BaseLLM by provider name
const llm = MJGlobal.Instance.ClassFactory.CreateInstance<BaseLLM>(
  BaseLLM, 
  'MistralLLM',  // Provider class name
  'your-api-key'
);

// Use the abstracted interface
const params: ChatParams = {
  model: 'mistral-large-latest',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'What is AI abstraction?' }
  ]
};

const result = await llm.ChatCompletion(params);

Environment-Based Provider Selection

Use environment variables or configuration to select the provider:

import { BaseLLM } from '@memberjunction/ai';
import { MJGlobal } from '@memberjunction/global';
import dotenv from 'dotenv';

// Prevent tree-shaking of OpenAILLM: the class factory resolves the class by name, so there
// is no static code path to it and some bundlers would otherwise strip it from the bundle.
import { LoadOpenAILLM } from '@memberjunction/ai-openai';
LoadOpenAILLM(); 

dotenv.config();

const providerName = process.env.AI_PROVIDER || 'OpenAILLM';
const apiKey = process.env.AI_API_KEY;

const llm = MJGlobal.Instance.ClassFactory.CreateInstance<BaseLLM>(
  BaseLLM, 
  providerName,
  apiKey
);

Direct Provider Usage

When necessary, you can directly use a specific AI provider:

import { OpenAILLM } from '@memberjunction/ai-openai';

// Note: with direct use of the OpenAILLM class there is no need for the LoadOpenAILLM call
// to prevent tree-shaking, since there is a static code path to the class.
// Create an instance with your API key
const llm = new OpenAILLM('your-openai-api-key');

// Use the provider-specific implementation
const result = await llm.ChatCompletion({
  model: 'gpt-4',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'What is AI abstraction?' }
  ]
});

console.log(result.data.choices[0].message.content);

Core Abstractions

Base Models

BaseModel

The foundational abstract class for all AI models. Provides:

  • Protected API key management
  • Base parameter and result types
  • Model usage tracking

BaseLLM

Abstract class for text generation models. Features:

  • Standard and streaming chat completions
  • Parallel chat completions with callbacks
  • Text summarization
  • Text classification
  • Additional provider-specific settings management
  • Response format control (Any, Text, Markdown, JSON, ModelSpecific)
  • Support for reasoning budget tokens (for reasoning models)

BaseEmbeddings

Abstract class for text embedding models. Provides:

  • Single text embedding generation
  • Batch text embedding generation
  • Model listing capabilities
  • Additional settings management
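
Because embedding providers follow the same class factory pattern as LLMs, usage looks similar. A minimal sketch; the provider class name ('OpenAIEmbedding') and the method and parameter names shown are assumptions, so confirm them against your provider package:

import { BaseEmbeddings } from '@memberjunction/ai';
import { MJGlobal } from '@memberjunction/global';

// Resolve an embeddings implementation by class name (name shown is hypothetical)
const embedder = MJGlobal.Instance.ClassFactory.CreateInstance<BaseEmbeddings>(
  BaseEmbeddings,
  'OpenAIEmbedding',
  'your-api-key'
);

// Single text embedding (method and parameter names are illustrative)
const single = await embedder.EmbedText({
  text: 'What is AI abstraction?',
  model: 'your-embedding-model'
});

// Batch embedding (illustrative)
const batch = await embedder.EmbedTexts({
  texts: ['First document', 'Second document'],
  model: 'your-embedding-model'
});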

BaseAudioGenerator

Abstract class for audio processing models. Supports:

  • Text-to-speech generation
  • Speech-to-text transcription
  • Voice listing and management
  • Model and pronunciation dictionary queries
  • Configurable voice settings (stability, similarity, speed, etc.)

BaseVideoGenerator

Abstract class for video generation models. Enables:

  • Avatar-based video creation
  • Video translation capabilities
  • Avatar management and listing

BaseDiffusion

Abstract class for image generation models (placeholder for a future implementation).

LLM Operations

Standard Chat Completion

For interactive conversations with AI models:

import { ChatParams, ChatResult, ChatMessage } from '@memberjunction/ai';

const params: ChatParams = {
  model: 'your-model-name',
  messages: [
    { role: 'system', content: 'System instruction' },
    { role: 'user', content: 'User message' },
    { role: 'assistant', content: 'Assistant response' }
  ],
  temperature: 0.7,
  maxOutputTokens: 1000
};

const result: ChatResult = await llm.ChatCompletion(params);

Streaming Chat Completion

For real-time streaming of responses:

import { ChatParams, ChatResult, ChatMessage, StreamingChatCallbacks } from '@memberjunction/ai';

// Define the streaming callbacks
const callbacks: StreamingChatCallbacks = {
  // Called when a new chunk arrives
  OnContent: (chunk: string, isComplete: boolean) => {
    if (isComplete) {
      console.log("\nStream completed!");
    } else {
      // Print chunks as they arrive (or add to UI)
      process.stdout.write(chunk);
    }
  },
  
  // Called when the complete response is available
  OnComplete: (finalResponse: ChatResult) => {
    console.log("\nFull response:", finalResponse.data.choices[0].message.content);
    console.log("Total tokens:", finalResponse.data.usage.totalTokens);
  },
  
  // Called if an error occurs during streaming
  OnError: (error: any) => {
    console.error("Streaming error:", error);
  }
};

// Create streaming chat parameters
const params: ChatParams = {
  model: 'gpt-4',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Write a short poem about AI, one line at a time.' }
  ],
  streaming: true, // Enable streaming
  streamingCallbacks: callbacks
};

// The ChatCompletion API remains the same, but will stream results
await llm.ChatCompletion(params);

Cancellable Operations with AbortSignal

The MemberJunction AI Core package supports cancellation of long-running operations using the standard JavaScript AbortSignal pattern. This provides a clean, standardized way to cancel AI operations when needed.

Understanding the AbortSignal Pattern

The AbortSignal pattern separates concerns between two roles:

  • Controller (Caller): Creates the AbortController and decides when to cancel
  • Worker (AI Operations): Receives the AbortSignal and stops work when it fires

Basic Cancellation Example

import { ChatParams, BaseLLM } from '@memberjunction/ai';

async function cancellableAIChat() {
  // Create the cancellation controller (the "boss")
  const controller = new AbortController();
  
  // Set up automatic timeout cancellation
  const timeout = setTimeout(() => {
    controller.abort(); // Cancel after 30 seconds
  }, 30000);

  try {
    const params: ChatParams = {
      model: 'gpt-4',
      messages: [
        { role: 'user', content: 'Write a very long story...' }
      ],
      cancellationToken: controller.signal // Pass the signal token
    };

    const result = await llm.ChatCompletion(params);
    clearTimeout(timeout); // Clear timeout if completed successfully
    return result;
    
  } catch (error) {
    if (error.message.includes('cancelled')) {
      console.log('Operation was cancelled');
    } else {
      console.error('Operation failed:', error);
    }
  }
}

User-Initiated Cancellation

Perfect for UI applications where users can cancel operations:

class AIInterface {
  private currentController: AbortController | null = null;

  async startAIConversation() {
    // Create new controller for this conversation
    this.currentController = new AbortController();
    
    try {
      const result = await llm.ChatCompletion({
        model: 'gpt-4',
        messages: [{ role: 'user', content: 'Generate a detailed report...' }],
        cancellationToken: this.currentController.signal
      });
      
      console.log('AI Response:', result.data.choices[0].message.content);
    } catch (error) {
      if (error.message.includes('cancelled')) {
        console.log('User cancelled the conversation');
      }
    } finally {
      this.currentController = null;
    }
  }

  // Called when user clicks "Cancel" button
  cancelConversation() {
    if (this.currentController) {
      this.currentController.abort(); // Instant cancellation
    }
  }
}

Multiple Cancellation Sources

One signal can be cancelled from multiple sources:

async function smartAIExecution() {
  const controller = new AbortController();
  const signal = controller.signal;

  // 1. User cancel button
  document.getElementById('cancel')?.addEventListener('click', () => {
    controller.abort(); // User cancellation
  });

  // 2. Resource limit cancellation  
  if (await checkMemoryUsage() > MAX_MEMORY) {
    controller.abort(); // Resource limit cancellation
  }

  // 3. Window unload cancellation
  window.addEventListener('beforeunload', () => {
    controller.abort(); // Page closing cancellation
  });

  // 4. Timeout cancellation
  setTimeout(() => controller.abort(), 60000); // 1 minute timeout

  try {
    const result = await llm.ChatCompletion({
      model: 'gpt-4',
      messages: [{ role: 'user', content: 'Complex analysis task...' }],
      cancellationToken: signal // One token, many cancel sources!
    });
    
    return result;
  } catch (error) {
    // The AI operation doesn't know WHY it was cancelled - just that it should stop
    console.log('AI operation was cancelled:', error.message);
  }
}

How It Works Internally

The AI Core package implements cancellation at multiple levels:

  1. BaseLLM Level: Checks cancellation token before and during operations
  2. Provider Level: Native cancellation support where available (e.g., fetch API)
  3. Fallback Pattern: Promise.race for providers without native cancellation

// Internal implementation example (simplified)
async ChatCompletion(params: ChatParams): Promise<ChatResult> {
  // Check if already cancelled before starting
  if (params.cancellationToken?.aborted) {
    throw new Error('Operation was cancelled');
  }

  // For providers with native cancellation support
  if (this.hasNativeCancellation) {
    return await this.callProviderAPI({
      ...params,
      signal: params.cancellationToken // Native fetch cancellation
    });
  }

  // Fallback: Promise.race pattern for other providers
  const promises = [
    this.callProviderAPI(params) // The actual AI call
  ];

  if (params.cancellationToken) {
    promises.push(
      new Promise<never>((_, reject) => {
        params.cancellationToken!.addEventListener('abort', () => {
          reject(new Error('Operation was cancelled'));
        });
      })
    );
  }

  return await Promise.race(promises);
}

Key Benefits

  1. 🎯 Responsive UX: Users can cancel long-running AI operations instantly
  2. 💾 Resource Management: Prevent runaway operations from consuming resources
  3. 🔄 Composable: Easy to combine user actions, timeouts, and resource limits
  4. 📱 Standard API: Uses native JavaScript AbortSignal - no custom patterns
  5. 🧹 Clean Cleanup: Automatic resource cleanup when operations are cancelled

The cancellation token pattern provides a robust, standardized way to make AI operations responsive and resource-efficient!

Text Summarization

For summarizing longer text content:

import { SummarizeParams, SummarizeResult } from '@memberjunction/ai';

const params: SummarizeParams = {
  text: 'Long text to summarize...',
  model: 'your-model-name',
  maxWords: 100
};

const result: SummarizeResult = await llm.SummarizeText(params);
console.log(result.summary);

Text Classification

For categorizing text into predefined classes:

import { ClassifyParams, ClassifyResult } from '@memberjunction/ai';

const params: ClassifyParams = {
  text: 'Text to classify',
  model: 'your-model-name',
  classes: ['Category1', 'Category2', 'Category3']
};

const result: ClassifyResult = await llm.ClassifyText(params);
console.log(result.classification);

Response Format Control

Control the format of AI responses:

const params: ChatParams = {
  // ...other parameters
  responseFormat: 'JSON',  // 'Any', 'Text', 'Markdown', 'JSON', or 'ModelSpecific'
};

// For provider-specific response formats
const customFormatParams: ChatParams = {
  // ...other parameters
  responseFormat: 'ModelSpecific',
  modelSpecificResponseFormat: {
    // Provider-specific format options
  }
};
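
When you request JSON, the structured output still arrives as ordinary message content, so parse it after checking for success. A small sketch using the result shape shown elsewhere in this README:

const jsonResult = await llm.ChatCompletion(params);
if (jsonResult.success) {
  // The model's JSON arrives as string content in the first choice
  const raw = jsonResult.data.choices[0].message.content as string;
  const parsed = JSON.parse(raw);
  console.log(parsed);
}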

Error Handling

All operations return a standardized result format with error information:

const result = await llm.ChatCompletion(params);

if (!result.success) {
  console.error('Error:', result.errorMessage);
  console.error('Status Text:', result.statusText);
  console.error('Exception:', result.exception);
  console.error('Time Elapsed:', result.timeElapsed, 'ms');
} else {
  console.log('Success! Response time:', result.timeElapsed, 'ms');
}

Token Usage Tracking

Monitor token usage consistently across different providers:

const result = await llm.ChatCompletion(params);
console.log('Prompt Tokens:', result.data.usage.promptTokens);
console.log('Completion Tokens:', result.data.usage.completionTokens);
console.log('Total Tokens:', result.data.usage.totalTokens);

Available Providers

The following provider packages implement the MemberJunction AI abstractions: @memberjunction/ai-openai, @memberjunction/ai-anthropic, @memberjunction/ai-mistral, and the other packages in the @memberjunction/ai-* namespace.

Note: Each provider implements the features it supports. See the individual provider documentation for specific capabilities.

Implementation Details

Streaming Architecture

The BaseLLM class uses a template method pattern for handling streaming:

  1. The main ChatCompletion method checks if streaming is requested and supported
  2. If streaming is enabled, it calls the template method handleStreamingChatCompletion
  3. Provider implementations supply three key methods:
    • createStreamingRequest: Creates the provider-specific streaming request
    • processStreamingChunk: Processes individual chunks from the stream
    • finalizeStreamingResponse: Creates the final response object

This architecture allows for a clean separation between common streaming logic and provider-specific implementations.
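
To make the division of labor concrete, here is the template method pattern in isolation: a self-contained sketch of the pattern itself, not the actual BaseLLM source, using the three hook names described above:

// Illustration of the template method pattern (not BaseLLM itself)
abstract class StreamingTemplate {
  // The base class owns the streaming loop...
  async run(chunks: string[]): Promise<string> {
    this.createStreamingRequest();                                  // open the stream
    for (const chunk of chunks) this.processStreamingChunk(chunk);  // handle each chunk
    return this.finalizeStreamingResponse();                        // build the final result
  }

  // ...while subclasses supply the provider-specific hooks
  protected abstract createStreamingRequest(): void;
  protected abstract processStreamingChunk(chunk: string): void;
  protected abstract finalizeStreamingResponse(): string;
}

class DemoProvider extends StreamingTemplate {
  private buffer = '';
  protected createStreamingRequest(): void { this.buffer = ''; }
  protected processStreamingChunk(chunk: string): void { this.buffer += chunk; }
  protected finalizeStreamingResponse(): string { return this.buffer; }
}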

API Reference

Result Types

BaseResult

All operations return results extending BaseResult:

class BaseResult {
    success: boolean;
    startTime: Date;
    endTime: Date;
    errorMessage: string;
    exception: any;
    timeElapsed: number; // Computed getter
}

ChatResult

Extends BaseResult with chat-specific data:

class ChatResult extends BaseResult {
    data: {
        choices: Array<{
            message: ChatCompletionMessage;
            index: number;
            finishReason?: string;
        }>;
        usage: ModelUsage;
    };
    statusText?: string;
}

Chat Message Types

ChatMessage

Supports multi-modal content:

type ChatMessage = {
    role: 'system' | 'user' | 'assistant';
    content: string | ChatMessageContentBlock[];
}

type ChatMessageContentBlock = {
    type: 'text' | 'image_url' | 'video_url' | 'audio_url' | 'file_url';
    content: string; // URL or base64 encoded content
}
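
For example, a user message that pairs text with an image, built from the block types above:

import { ChatMessage } from '@memberjunction/ai';

const message: ChatMessage = {
  role: 'user',
  content: [
    { type: 'text', content: 'What is shown in this image?' },
    { type: 'image_url', content: 'https://example.com/photo.jpg' }
  ]
};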

Streaming Callbacks

interface StreamingChatCallbacks {
    OnContent?: (chunk: string, isComplete: boolean) => void;
    OnComplete?: (finalResponse: ChatResult) => void;
    OnError?: (error: any) => void;
}

interface ParallelChatCompletionsCallbacks {
    OnCompletion?: (response: ChatResult, index: number) => void;
    OnError?: (error: any, index: number) => void;
    OnAllCompleted?: (responses: ChatResult[]) => void;
}
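
BaseLLM includes built-in parallel completion support (see its API for the entry point); the sketch below simply fans out with Promise.all over ChatCompletion to show how this callback shape is driven:

import { BaseLLM, ChatParams, ChatResult, ParallelChatCompletionsCallbacks } from '@memberjunction/ai';

async function runInParallel(llm: BaseLLM, prompts: string[]): Promise<ChatResult[]> {
  const callbacks: ParallelChatCompletionsCallbacks = {
    OnCompletion: (response, index) => console.log(`Completion #${index} finished`),
    OnError: (error, index) => console.error(`Completion #${index} failed:`, error),
    OnAllCompleted: (responses) => console.log(`All ${responses.length} completions done`)
  };

  const results = await Promise.all(prompts.map(async (prompt, index) => {
    const params: ChatParams = {
      model: 'your-model-name',
      messages: [{ role: 'user', content: prompt }]
    };
    try {
      const result = await llm.ChatCompletion(params);
      callbacks.OnCompletion?.(result, index);
      return result;
    } catch (error) {
      callbacks.OnError?.(error, index);
      throw error;
    }
  }));

  callbacks.OnAllCompleted?.(results);
  return results;
}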

Configuration

API Key Management

The package includes a flexible API key management system through the AIAPIKeys class:

import { GetAIAPIKey } from '@memberjunction/ai';

// Get API key for a specific provider
const apiKey = GetAIAPIKey('OpenAI');

By default, it looks for environment variables with the pattern: AI_VENDOR_API_KEY__[PROVIDER_NAME]

Example:

AI_VENDOR_API_KEY__OPENAI=your-api-key
AI_VENDOR_API_KEY__ANTHROPIC=your-api-key
AI_VENDOR_API_KEY__MISTRAL=your-api-key

You can extend the AIAPIKeys class to implement custom API key retrieval logic.
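
For instance, a subclass could read keys from a secrets store instead of environment variables. A sketch only: the overridden method name (GetAPIKey) and its signature are assumptions here, so match them to the actual AIAPIKeys class:

import { AIAPIKeys } from '@memberjunction/ai';

// Hypothetical secrets client, standing in for your own
declare const mySecretsStore: Map<string, string>;

class VaultAPIKeys extends AIAPIKeys {
  // Method name and signature assumed for illustration
  public GetAPIKey(providerName: string): string {
    // Look up the key in a secrets manager, falling back to the default behavior
    return mySecretsStore.get(`ai/${providerName}`) ?? super.GetAPIKey(providerName);
  }
}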

Provider-Specific Settings

Many providers support additional configuration beyond the API key:

const llm = new SomeLLM('api-key');
llm.SetAdditionalSettings({
    baseURL: 'https://custom-endpoint.com',
    organization: 'my-org',
    // Provider-specific settings
});

Integration with MemberJunction

While this package can be used standalone, it integrates seamlessly with the MemberJunction framework:

  • Uses @memberjunction/global for class factory pattern and registration
  • Compatible with MemberJunction's metadata system
  • Can leverage MemberJunction's configuration management when available

Dependencies

  • @memberjunction/global (^2.43.0) - MemberJunction global utilities including class factory
  • rxjs (^7.8.1) - Reactive extensions for JavaScript (used in streaming implementations)
  • dotenv (^16.4.1) - Environment variable management
  • typeorm (^0.3.20) - ORM functionality (optional, only if using with full MemberJunction)

Development

Building

cd packages/AI/Core
npm run build

TypeScript Configuration

The package is configured with TypeScript strict mode and targets ES2022. See tsconfig.json for full compiler options.

Best Practices

  1. Always use the class factory pattern for maximum flexibility
  2. Handle errors gracefully - check result.success before accessing data
  3. Monitor token usage to manage costs and stay within limits
  4. Use streaming for long responses to improve user experience
  5. Leverage parallel completions for comparison or reliability
  6. Cache API keys using the built-in caching mechanism
  7. Specify response formats when you need structured output

Troubleshooting

Common Issues

  1. "API key cannot be empty" error

    • Ensure you're passing a valid API key to the constructor
    • Check environment variables are properly set
  2. Provider not found when using class factory

    • Make sure to import and call the provider's Load function
    • Verify the provider class name matches exactly
  3. Streaming not working

    • Check if the provider supports streaming (llm.SupportsStreaming)
    • Ensure streaming callbacks are properly defined
  4. Type errors with content blocks

    • Use the provided type guards and interfaces
    • Ensure content format matches the expected structure

License

See the repository root for license information.
