@skadefro/litellm v0.5.1

License: ISC

Usage

npm install @skadefro/litellm

import { completion } from '@skadefro/litellm';

process.env['OPENAI_API_KEY'] = 'your-openai-key';

const response = await completion({
  model: 'gpt-3.5-turbo',
  messages: [{ content: 'Hello, how are you?', role: 'user' }],
});

// or stream the results
const stream = await completion({
  model: 'gpt-3.5-turbo',
  messages: [{ content: 'Hello, how are you?', role: 'user' }],
  stream: true,
});

for await (const part of stream) {
  process.stdout.write(part.choices[0]?.delta?.content || '');
}
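
The non-streaming call resolves to a standardised, OpenAI-style response object. A minimal sketch of reading the reply, assuming the usual choices[].message.content shape:

// Read the assistant's reply from the non-streaming response above
// (assumes the standardised OpenAI-style response shape).
console.log(response.choices[0]?.message?.content);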

Features

We aim to support all the features that the LiteLLM Python package supports.

  • Standardised completions
  • Standardised embeddings (see the sketch below)
  • Caching ❌ (not yet supported)
  • Proxy ❌ (not yet supported)
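
Embeddings follow the same pattern as completions. A minimal sketch, assuming the package mirrors the Python LiteLLM API with an embedding export (the exact signature and model name here are assumptions):

// Sketch: standardised embeddings. The `embedding` export and its
// signature are assumed to mirror the Python LiteLLM API.
import { embedding } from '@skadefro/litellm';

const result = await embedding({
  model: 'text-embedding-ada-002',
  input: 'Hello, how are you?',
});

// OpenAI-style embedding responses carry one vector per input
console.log(result.data[0]?.embedding?.length);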

Supported Providers

Provider support is tracked separately for completion and streaming:

  • openai
  • cohere
  • anthropic
  • ollama
  • ai21
  • replicate
  • huggingface
  • together_ai
  • openrouter
  • vertex_ai
  • palm
  • baseten
  • azure
  • sagemaker
  • bedrock
  • vllm
  • nlp_cloud
  • aleph alpha
  • petals
  • deepinfra
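
The provider is selected from the model name. A hedged sketch, assuming LiteLLM's convention of plain model names for hosted providers and provider-prefixed names (e.g. ollama/...) carries over to this package; the model identifiers below are assumptions:

// Sketch: routing to other providers via the model name, assuming
// LiteLLM's provider-prefix convention applies here as well.
process.env['ANTHROPIC_API_KEY'] = 'your-anthropic-key';

const anthropicResponse = await completion({
  model: 'claude-2', // assumed to route to the anthropic provider
  messages: [{ content: 'Hello, how are you?', role: 'user' }],
});

// Local models served through Ollama use a provider prefix
const ollamaResponse = await completion({
  model: 'ollama/llama2',
  messages: [{ content: 'Hello, how are you?', role: 'user' }],
});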

Development

Clone the repo

git clone https://github.com/zya/litellmjs.git

Install dependencies

npm install

Run unit tests

npm t

Run E2E tests

First, copy the example env file.

cp .example.env .env

Then fill in the variables with your API keys so the E2E tests can run.

OPENAI_API_KEY=<Your OpenAI API key>
....

Finally, run the tests:

npm run test:e2e