openai-managed-chat-completion v1.1.5
OpenAI Managed Chat Completion

A TypeScript project that provides a fully managed class for chatting with the OpenAI GPT model, with support for history (context from your previous prompts). It uses the OpenAI API to communicate with the GPT model and provides two modes of interaction: one-time completions and streaming completions. There is also a CLI interface in demo.ts that lets you chat with the GPT model from the command line.

Installation from package manager

Yarn

$ yarn add openai-managed-chat-completion

NPM

$ npm install openai-managed-chat-completion

Installation from repository

  1. Clone the repository:
$ git clone https://github.com/LucasALLOIN/openai-managed-chat-completion.git
  2. Install dependencies:
$ cd openai-managed-chat-completion
$ yarn
  3. (only for the demo CLI) Set your OpenAI API key as an environment variable in .env (a loading sketch follows this list):
OPENAI_API_KEY=<YOUR_API_KEY>
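
The demo reads the key with the usual dotenv pattern; the following is a minimal sketch of that step, not the exact code in demo.ts:

import * as dotenv from "dotenv";

// Load OPENAI_API_KEY from .env into process.env so the demo can read it.
dotenv.config();

console.log(Boolean(process.env.OPENAI_API_KEY)); // true if the key was found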

Developer Documentation

The main class in this project is GPTCompletion, which handles creating chat completions with the OpenAI API. It defines methods for getting completions both with and without streaming.

Constructor

constructor(openai: OpenAIApi, model = "gpt-3.5-turbo", options: GPTCompletionOptions = {})

Creates a new GPTCompletion instance.

  • openai: An instance of OpenAIApi from the openai package.
  • model: The name of the GPT model to use. Defaults to "gpt-3.5-turbo".
  • options: An optional GPTCompletionOptions object. Defaults to an empty object.
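
For instance, a minimal construction sketch, assuming the openai v3 SDK (which wraps the API key in a Configuration object); "gpt-4" is only an illustrative model name:

import { Configuration, OpenAIApi } from "openai";
import GPTCompletion from "./GPTCompletion";

// Build the OpenAI client that GPTCompletion will use.
const openai = new OpenAIApi(
  new Configuration({ apiKey: process.env.OPENAI_API_KEY })
);

// Target a specific chat model instead of the default "gpt-3.5-turbo".
const gpt = new GPTCompletion(openai, "gpt-4");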

Methods

async getCompletion(prompt: string): Promise<ChatCompletionRequestMessage>

Gets a one-time completion from the GPT model.

  • prompt: The prompt to use for the completion.

Returns a Promise that resolves to the ChatCompletionRequestMessage returned by the OpenAI API.
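
For example (a sketch assuming a GPTCompletion instance named gpt constructed as above):

gpt.getCompletion("What is TypeScript?").then((message) => {
  // message is the assistant's ChatCompletionRequestMessage ({ role, content }).
  console.log(message.content);
});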

async getStreamCompletion(prompt: string): Promise<Readable>

Gets a streaming completion from the GPT model.

  • prompt: The prompt to use for the completion.

Returns a Promise that resolves to a Readable stream of completion data.

parseStreamCompletion(chunk: Buffer, onParsed?: (parsedData: ParsedData) => void, onFinished?: () => void, onError?: (error: any) => void)

Parses a streamed response from the OpenAI API.

  • chunk: The streamed response chunk to parse.
  • onParsed: A callback function to call when the parsed data is available.
  • onFinished: A callback function to call when the streaming response is finished.
  • onError: A callback function to call when an error occurs while parsing the streamed response.
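
A streaming sketch that wires up all three callbacks (the exact shape of ParsedData is whatever the package exports; it is only logged here):

gpt.getStreamCompletion("Tell me a short story.").then((stream) => {
  stream.on("data", (chunk: Buffer) => {
    gpt.parseStreamCompletion(
      chunk,
      (parsedData) => console.log(parsedData),         // each parsed piece of the response
      () => console.log("[stream finished]"),          // called once the stream is done
      (error) => console.error("parse error:", error)  // called if a chunk cannot be parsed
    );
  });
});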

Example Usage

import { Configuration, OpenAIApi } from "openai";
import GPTCompletion from "./GPTCompletion";

// The openai v3 SDK expects the API key to be wrapped in a Configuration object.
const openai = new OpenAIApi(
  new Configuration({ apiKey: process.env.OPENAI_API_KEY })
);

const gpt = new GPTCompletion(openai);

// One-time completion.
gpt.getCompletion("Hello, GPT!").then((result) => {
  console.log(result); // e.g. { role: "assistant", content: "Hi there!" }
});

// Streaming completion: await the Readable stream, then parse each chunk.
gpt.getStreamCompletion("Tell me about yourself.").then((stream) => {
  stream.on("data", (chunk: Buffer) => {
    gpt.parseStreamCompletion(chunk, (parsedData) => {
      console.log(parsedData);
    });
  });
});

Dependencies

  • dotenv: A zero-dependency module that loads environment variables from a .env file.
  • openai: A client library for the OpenAI API.
  • nodemon: A tool that automatically restarts the Node process when file changes are detected.
  • ts-node: A tool that executes TypeScript files directly, without a separate compilation step.
  • typescript: A superset of JavaScript that adds static typing and other features.
  • rollup: A module bundler for JavaScript.
  • rollup-plugin-node-resolve: A Rollup plugin that locates modules using the Node resolution algorithm.
  • rollup-plugin-typescript2: A Rollup plugin that uses the TypeScript compiler to transpile TypeScript files.