@cjpais/inference v0.0.13 • Published 2 months ago

inference

A wrapper around models from several different inference providers that rate limits requests to them, and exposes them with more native TypeScript support.

My application may send many parallel requests to inference models, and those requests need to be rate limited per provider across the whole application. This library solves that problem.
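The core idea can be illustrated with a standalone sketch. This is not the package's actual implementation (its `createRateLimiter` internals are not shown here); it is a minimal illustration, with made-up names, of how a requests-per-second limiter can serialize parallel calls:

```typescript
// Minimal sketch of a requests-per-second limiter (illustrative only;
// `createSimpleRateLimiter` is a hypothetical name, not the package's API).
type Task<T> = () => Promise<T>;

function createSimpleRateLimiter(requestsPerSecond: number) {
  const intervalMs = 1000 / requestsPerSecond;
  let nextSlot = 0; // timestamp (ms) when the next request may start

  // Wrap any async task; tasks are spaced at least `intervalMs` apart,
  // even when many are submitted in parallel.
  return async function schedule<T>(task: Task<T>): Promise<T> {
    const now = Date.now();
    const startAt = Math.max(now, nextSlot);
    nextSlot = startAt + intervalMs;
    if (startAt > now) {
      await new Promise((resolve) => setTimeout(resolve, startAt - now));
    }
    return task();
  };
}
```

With `requestsPerSecond = 2`, three parallel tasks get start slots at roughly 0 ms, 500 ms, and 1000 ms, so the burst is spread out instead of hitting the provider all at once.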

This is very much a work in progress, so a number of things are left unimplemented for the time being. However, the basic functionality is there.

Supported providers:

  • OpenAI (for chat, audio, image, embedding)
  • Together (for chat)
  • Mistral (for chat)
  • Whisper.cpp (for audio)

WIP Stuff:

  • consistent JSON mode
  • error handling
  • more rate limiting options
  • more providers (llama.cpp for chat, image and embedding)
  • move to config file & code gen for better typing?

Usage

Check out test/index.test.ts for usage examples.

Generally speaking:

  1. Instantiate a provider:

```ts
const oai = new OpenAIProvider({
  apiKey: process.env.OPENAI_API_KEY!,
});
```

  2. Create a rate limiter based on your own usage (this is in requests per second):

```ts
const oaiLimiter = createRateLimiter(2);
```

  3. Define the models you want to use and their aliases:

```ts
const CHAT_MODELS: Record<string, ChatModel> = {
  "gpt-3.5": {
    provider: oai,
    name: "gpt-3.5",
    providerModel: "gpt-3.5-turbo-0125",
    rateLimiter: oaiLimiter,
  },
  "gpt-4": {
    provider: oai,
    name: "gpt-4",
    providerModel: "gpt-4-0125-preview",
    rateLimiter: oaiLimiter,
  },
};
```

  4. Create the inference with the models you want:

```ts
const inference = new Inference({ chatModels: CHAT_MODELS });
```

  5. Call the inference with the model you want to use:

```ts
const result = await inference.chat({ model: "gpt-3.5", prompt: "Hello, world!" });
```
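The alias lookup in steps 3–5 can be sketched in isolation. This is a hypothetical illustration of the registry-and-dispatch pattern (mock types and a `makeInference` helper invented for this example), not the package's actual internals:

```typescript
// Illustrative sketch: dispatch a chat call by model alias.
// `MockChatModel` and `makeInference` are made-up names for this example.
interface MockChatModel {
  providerModel: string;
  call: (prompt: string) => Promise<string>;
}

function makeInference(chatModels: Record<string, MockChatModel>) {
  return {
    async chat(opts: { model: string; prompt: string }): Promise<string> {
      // Resolve the caller-facing alias (e.g. "gpt-3.5") to a concrete model.
      const m = chatModels[opts.model];
      if (!m) throw new Error(`Unknown model alias: ${opts.model}`);
      return m.call(opts.prompt);
    },
  };
}
```

Keeping aliases separate from provider model names (e.g. `"gpt-3.5"` vs `"gpt-3.5-turbo-0125"`) means application code never has to change when a provider's model identifier is updated.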

To install dependencies:

```sh
bun install
```

To run:

```sh
bun run index.ts
```