tokencost v0.1.11 • Published 10 months ago

License
ISC
Repository
github

TokenCost for Node.js

TokenCost helps calculate the USD cost of using major Large Language Model (LLM) APIs by estimating the cost of prompts and completions.

API forked from TokenCost for Python, with the code adapted for Node.js apps. Originally written for StartKit.

Improvements

We've added some improvements over the Python version:

  • Added support for calculating the cost of sending images with chat prompts
  • Added Image generation support
  • Added Audio/Whisper support

LLM Costs

Prices are forked from LiteLLM's cost dictionary.
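At its core, per-token pricing like this is simple arithmetic: tokens multiplied by the per-token rate for the model. A minimal sketch (not the library's actual internals), using the gpt-4 rates from the model dictionary shown later in this README:

```javascript
// Estimate USD cost from token counts and per-token prices.
// Rates mirror the "gpt-4" entry in the cost dictionary:
// input_cost_per_token 0.00003, output_cost_per_token 0.00006.
function estimateCost(inputTokens, outputTokens, price) {
  return (
    inputTokens * price.inputCostPerToken +
    outputTokens * price.outputCostPerToken
  );
}

const gpt4 = { inputCostPerToken: 0.00003, outputCostPerToken: 0.00006 };
const cost = estimateCost(1000, 500, gpt4); // ≈ $0.06
```

The library layers tokenization on top of this, so you pass strings or messages rather than raw token counts.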

Installation

npm i tokencost

Usage

You can calculate the cost of prompts and completions from OpenAI requests:

import OpenAI from "openai";
import { calculatePromptCost, calculateCompletionCost } from "tokencost";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const messages = [
  { role: "system", content: "You are a helpful assistant." },
  {
    role: "user",
    content: "What is the prime directive?\n",
  },
];
const promptCost = calculatePromptCost(messages, "gpt-4o");

const completion = await openai.chat.completions.create({
  messages,
  model: "gpt-4o",
});

const { content } = completion.choices[0].message;
const completionCost = calculateCompletionCost(content, "gpt-4o");

console.log(`Total cost:`, promptCost + completionCost);

Note: If you're including images as part of a chat completion, they will also be included in the cost calculation!
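For example, a prompt mixing text and an image can be written in OpenAI's content-part format and passed to calculatePromptCost the same way as a text-only prompt. The exact message shape tokencost accepts is an assumption here, and the URL is a placeholder:

```javascript
// A chat prompt mixing text and an image, in OpenAI's content-part format.
const messages = [
  { role: "system", content: "You are a helpful assistant." },
  {
    role: "user",
    content: [
      { type: "text", text: "What is in this picture?" },
      {
        type: "image_url",
        image_url: { url: "https://example.com/photo.jpg", detail: "high" },
      },
    ],
  },
];

// const promptCost = calculatePromptCost(messages, "gpt-4o");
```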

Images

You can also calculate the cost of generating images:

import { calculateImageGenerationCost } from "tokencost";

const imageOptions = { size: "1024x1024", quality: "standard" };
const cost = calculateImageGenerationCost(imageOptions, "dall-e-3");

And of identifying images with Vision:

import { readFileSync } from "fs";
import { calculateImageDetectionCost } from "tokencost";

const image = readFileSync("image.jpg");
const cost = calculateImageDetectionCost(image, "gpt-4o");
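For reference, OpenAI's published token accounting for high-detail vision input works roughly like this: the image is scaled to fit within 2048×2048, its shortest side is then scaled to 768px, and each 512px tile costs 170 tokens plus an 85-token base. A sketch of that tile math, independent of tokencost's own implementation:

```javascript
// Approximate high-detail vision token count for an image,
// per OpenAI's documented tiling rules.
function visionTokens(width, height) {
  // Scale to fit within 2048x2048.
  const fit = Math.min(1, 2048 / Math.max(width, height));
  let w = width * fit;
  let h = height * fit;
  // Scale so the shortest side is 768px.
  const scale = 768 / Math.min(w, h);
  w = w * scale;
  h = h * scale;
  // 170 tokens per 512px tile, plus an 85-token base.
  const tiles = Math.ceil(w / 512) * Math.ceil(h / 512);
  return 170 * tiles + 85;
}

visionTokens(1024, 1024); // 765, matching OpenAI's documented example
```

Multiply the token count by the model's input rate to get the image's share of the prompt cost.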

Audio

import fs from "fs";
import OpenAI from "openai";
import { calculateCompletionCost } from "tokencost";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const transcription = await openai.audio.transcriptions.create({
  file: fs.createReadStream("audio.mp3"),
  model: "whisper-1",
});

const { duration } = transcription;
const cost = calculateCompletionCost({ duration }, "whisper-1");
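Whisper pricing is duration-based rather than token-based; LiteLLM's dictionary expresses it as a cost per second of audio. A minimal sketch of that arithmetic — the 0.0001/second rate below is illustrative, so read the real value from the tracked model entry:

```javascript
// Duration-based pricing: cost = seconds of audio * cost per second.
// The rate here is an assumed example, not read from the cost dictionary.
function transcriptionCost(durationSeconds, costPerSecond) {
  return durationSeconds * costPerSecond;
}

transcriptionCost(60, 0.0001); // ≈ $0.006 for one minute
```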

Models

Fetch a list of all currently tracked models (updated from LiteLLM's cost dictionary).

import { update, models, getModel, getModels, getImageModel } from "tokencost";

// the last fetched model list (updated when the module is installed)
console.log(models);
// {
//   "gpt-4": {
//     "max_tokens": 4096,
//     "max_input_tokens": 8192,
//     "max_output_tokens": 4096,
//     "input_cost_per_token": 0.00003,
//     "output_cost_per_token": 0.00006,
//     "mode": "chat",
//     "supports_function_calling": true,
//     "provider": "openai"
//   },
//   ...
// }

// or get all the models of a specific type:
const chatModels = getModels("chat");

// or type and provider
const openAiImageModels = getModels("image-generation", { provider: "openai" });
// or you can grab a specific model by its key
const model = getModel("gpt-4o");

// fetching image models is a little more complicated as they are
// keyed on their quality and size
let imageModel = getModel("hd/1024-x-1792/dall-e-3");
// or
imageModel = getImageModel("dall-e-3", { quality: "hd", size: "1024x1792" });

// you can also manually update the model list (don't do it too often):
await update();
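The image-model key format above is "quality/size/model", with the "x" in the size expanded to "-x-". A small helper can build that key — this presumably mirrors what getImageModel does internally and is shown only to document the key shape:

```javascript
// Build an image-model lookup key like "hd/1024-x-1792/dall-e-3".
function imageModelKey(model, { quality, size }) {
  return `${quality}/${size.replace("x", "-x-")}/${model}`;
}

imageModelKey("dall-e-3", { quality: "hd", size: "1024x1792" });
// -> "hd/1024-x-1792/dall-e-3"
```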

Contributing

Contributions to TokenCost are welcome! Feel free to create an issue for any bug reports, complaints, or feature suggestions.

License

TokenCost is released under the MIT License.
