@ai-toolkit/cerebras v0.1.13

License: Apache-2.0
Repository: github
Last release: 9 months ago

AI TOOLKIT - Cerebras Provider

The Cerebras provider for the AI TOOLKIT contains language model support for Cerebras, offering high-speed AI model inference powered by Cerebras Wafer-Scale Engines and CS-3 systems.

Setup

The Cerebras provider is available in the @ai-toolkit/cerebras module. You can install it with:

npm i @ai-toolkit/cerebras

Provider Instance

You can import the default provider instance cerebras from @ai-toolkit/cerebras:

import { cerebras } from '@ai-toolkit/cerebras';
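The default instance reads its configuration from the environment. If this package follows the common AI TOOLKIT provider-factory pattern, you can also create a customized instance explicitly. This is a sketch, not a confirmed API: the createCerebras factory, the apiKey option, and the CEREBRAS_API_KEY variable name are assumptions to verify against the package's exported types.

```typescript
// Hypothetical: assumes the module exports a `createCerebras` factory,
// a common pattern for AI TOOLKIT providers. Verify before relying on it.
import { createCerebras } from '@ai-toolkit/cerebras';

const cerebras = createCerebras({
  // Assumed option name; the provider may instead read CEREBRAS_API_KEY
  // from the environment on its own.
  apiKey: process.env.CEREBRAS_API_KEY ?? '',
});

// The resulting instance is called with a model ID (see Available Models):
const model = cerebras('llama3.1-8b');
```

An explicit instance like this is useful when you need per-request keys or want to avoid depending on ambient environment variables.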

Available Models

Currently, Cerebras offers two models:

Llama 3.1 8B

  • Model ID: llama3.1-8b
  • 8 billion parameters
  • Knowledge cutoff: March 2023
  • Context Length: 8192
  • Training Tokens: 15 trillion+

Llama 3.3 70B

  • Model ID: llama-3.3-70b
  • 70 billion parameters
  • Knowledge cutoff: December 2023
  • Context Length: 8192
  • Training Tokens: 15 trillion+

Example

import { cerebras } from '@ai-toolkit/cerebras';
import { generateText } from 'ai-toolkit';

const { text } = await generateText({
  model: cerebras('llama3.1-8b'),
  prompt: 'Write a JavaScript function that sorts a list:',
});
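For longer completions you may want output incrementally rather than as a single string. Assuming the toolkit mirrors the widely used streamText function alongside generateText (an assumption — this README only documents generateText), a streaming call would look like:

```typescript
import { cerebras } from '@ai-toolkit/cerebras';
// `streamText` is assumed by analogy with `generateText`; confirm it is exported.
import { streamText } from 'ai-toolkit';

const result = streamText({
  model: cerebras('llama-3.3-70b'),
  prompt: 'Explain how wafer-scale inference differs from GPU inference:',
});

// Print tokens as they arrive instead of waiting for the full response.
for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```

Running this requires a valid Cerebras API key; within the Free Tier, keep in mind the 8192-token context limit noted below.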

Documentation

For more information about Cerebras' high-speed inference capabilities and the underlying API, see the Cerebras inference documentation.

Note: Due to high demand in the early launch phase, context windows are temporarily limited to 8192 tokens in the Free Tier.