
Streamed ChatGPT API With Node

A Node.js module for streaming ChatGPT responses from the OpenAI API. Streamed ChatGPT API lets you fetch AI-generated responses in real time. The module was created because other modules sometimes froze mid-stream, which made for a poor user experience; here, configurable timeouts automatically retry the request and throw an error if the connection drops.

Introduction

ChatGPT is an advanced AI language model developed by OpenAI. This module enables you to interact with the ChatGPT API, allowing you to send messages and receive AI-generated responses in real time. The OpenAI API provides access to various models, including the gpt-3.5-turbo model, which is used by default in this module.

Usage Example

A simple Node web app demonstrating the module with streamed chat can be found here: Streamed ChatGPT API Usage Example

Installation

Install using npm:

npm install streamed-chatgpt-api

Usage

To use the module, first import it:

const { fetchStreamedChat, fetchStreamedChatContent } = require('streamed-chatgpt-api');

Then call the fetchStreamedChat function with your options and a callback function to process the streamed response. In the simplest case, you can pass in just an OpenAI API key and a single string as the prompt:

const apiKey = 'your_api_key_here';

fetchStreamedChat({
    apiKey,
    messageInput: 'Hello, how are you?',
}, (responseChunk) => {
    // get the actual content from the JSON
    const content = JSON.parse(responseChunk).choices[0].delta.content;
    if (content) {
        process.stdout.write(content);
    }
});

You can also pass in an array of message objects, as shown in the OpenAI API documentation, where you can define the system prompt and the ongoing conversation. Here's a simple example:

const apiKey = 'your_api_key_here';

const messages = [{ role: 'system', content: '' }, { role: 'user', content: 'capital of canada' }];

fetchStreamedChat({
    apiKey,
    messageInput: messages,
    fetchTimeout: 10000,
}, (responseChunk) => {
    // get the actual content from the JSON
    const content = JSON.parse(responseChunk).choices[0].delta.content;
    if (content) {
        process.stdout.write(content);
    }
});
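
To continue an ongoing conversation, include the earlier turns in the array, with the model's previous reply added as a message with role 'assistant'. A minimal sketch (the message contents here are only illustrative):

const conversation = [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'capital of canada' },
    { role: 'assistant', content: 'The capital of Canada is Ottawa.' },
    { role: 'user', content: 'and its population?' }
];

fetchStreamedChat({
    apiKey,
    messageInput: conversation,
}, (responseChunk) => {
    // get the actual content from the JSON
    const content = JSON.parse(responseChunk).choices[0].delta.content;
    if (content) {
        process.stdout.write(content);
    }
});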

Using fetchStreamedChatContent

The fetchStreamedChatContent function is a higher-level function that simplifies fetching the generated content by not requiring you to deal with individual chunks. It takes the same options as fetchStreamedChat but also accepts three optional callback functions: onResponse, onFinish, and onError.

const apiKey = 'your_api_key_here';

fetchStreamedChatContent({
    apiKey,
    messageInput: 'Hello, how are you?',
}, (content) => {
    // onResponse
    process.stdout.write(content);
}, () => {
    // onFinish
    console.log('Chat completed');
}, (error) => {
    // onError
    console.error('Error:', error);
});
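
Since onResponse appears to receive the newly streamed text rather than the full reply (as the example above suggests), a common pattern is to accumulate the chunks yourself when you need the complete response once the stream finishes. A minimal sketch, with an illustrative prompt:

const apiKey = 'your_api_key_here';

let fullReply = '';

fetchStreamedChatContent({
    apiKey,
    messageInput: 'Write a haiku about Node.js',
}, (content) => {
    // onResponse: append each streamed piece as it arrives
    fullReply += content;
    process.stdout.write(content);
}, () => {
    // onFinish: the complete reply is now available
    console.log('\nFull reply:\n' + fullReply);
}, (error) => {
    // onError
    console.error('Error:', error);
});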

You can pass the same request options described in OpenAI's chat completion API reference.

The available parameters are listed in the table below. Only apiKey and messageInput (the prompt) are required. By default, the gpt-3.5-turbo model is used.

Options

The following options are available for the fetchStreamedChat and fetchStreamedChatContent functions:

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| apiKey | string | - | Your OpenAI API key. |
| messageInput | string or array of objects | - | The input message or messages to generate a chat response for. |
| apiUrl | string | "https://api.openai.com/v1/chat/completions" | The OpenAI API URL to use. |
| model | string | "gpt-3.5-turbo" | The OpenAI model to use. |
| temperature | number | - | The sampling temperature to use. |
| topP | number | - | The top_p value to use. |
| n | number | - | The number of responses to generate. |
| stop | string or array of strings | - | The sequence or sequences to stop generation at. |
| maxTokens | number | - | The maximum number of tokens to generate. |
| presencePenalty | number | - | The presence penalty value to use. |
| frequencyPenalty | number | - | The frequency penalty value to use. |
| logitBias | object | - | The logit bias object to use. |
| user | string | - | The user ID to use for the chat session. |
| retryCount | number | 3 | The number of times to retry if the chat response fetch fails. |
| fetchTimeout | number | 20000 | The timeout (in milliseconds) for the fetch request. |
| readTimeout | number | 10000 | The timeout (in milliseconds) for reading the response stream. |
| retryInterval | number | 2000 | The interval (in milliseconds) between retries. |
| totalTime | number | 300000 | The total time (in milliseconds) to allow for the chat response fetch. |
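
For example, a call that overrides some of these defaults might look like the following. This is a minimal sketch: the option names come from the table above, the values are only illustrative, and the timeout values assume milliseconds in line with the defaults shown.

fetchStreamedChatContent({
    apiKey,
    messageInput: 'Summarize the history of Node.js in two sentences.',
    model: 'gpt-3.5-turbo',
    temperature: 0.7,
    maxTokens: 200,
    retryCount: 5,        // retry up to five times on failure
    fetchTimeout: 30000,  // allow 30 seconds for the initial fetch
    retryInterval: 3000,  // wait 3 seconds between retries
}, (content) => {
    process.stdout.write(content);
}, () => {
    console.log('\nChat completed');
}, (error) => {
    console.error('Error:', error);
});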

License

MIT

Author

Created by Johann Dowa
