@rumbl/laserbeak v0.7.5

Laserbeak enables developers to run transformer models in the browser/Electron using WebGPU.

It is designed to efficiently manage models by caching them in IndexedDB and sharing weights between encoder-decoder models for optimal performance.
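
WebGPU is still rolling out across browsers, so it is worth checking for support before loading a model. A minimal sketch (the navigator.gpu check is standard WebGPU, not part of Laserbeak's API):

// Feature-detect WebGPU before attempting to load models.
// `navigator.gpu` is the standard WebGPU entry point; if it is absent,
// the browser cannot run GPU-backed inference.
if (!("gpu" in navigator)) {
    console.warn("WebGPU is not available in this browser.");
}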

🌟 Features

  • Run transformer models in the browser using WebGPU
  • Built on top of a custom Rust runtime for performance
  • Efficient model management with caching and weight sharing
  • Easy-to-use API for loading and running models

⚡️ Quick start

Install Laserbeak using npm:

npm i @rumbl/laserbeak

📚 Usage

Here's a simple example of how to load and run a model using Laserbeak:

import {
    SessionManager,
    AvailableModels,
    InferenceSession,
} from "@rumbl/laserbeak";

// Create a SessionManager instance
const manager = new SessionManager();

// Load a model, with a callback invoked once it has loaded
const modelSession = await manager.loadModel(AvailableModels.FLAN_T5_BASE, () =>
    console.log("Loaded successfully!")
);

// Run the model with a prompt and handle the output
const prompt = "Translate to German: Hello, how are you?";
await modelSession.run(prompt, (output: string) => {
    // Process the model output
    console.log(output);
});
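
FLAN-T5 is instruction-tuned, so plain natural-language prompts work well. Here is a hedged sketch of wiring the output callback into a page element (the #output element and the exact callback semantics are assumptions, not documented Laserbeak behavior):

// Hypothetical: display model output in a DOM element as the callback fires.
const outputEl = document.querySelector("#output");
const summaryPrompt = "Summarize: WebGPU gives web pages access to the GPU for compute.";

await modelSession.run(summaryPrompt, (output: string) => {
    // Assumption: the callback receives the model's output text.
    if (outputEl) {
        outputEl.textContent = output;
    }
});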

🚀 Roadmap

Laserbeak is still a pre-pre-alpha project; here's our roadmap:

  • F16 support
  • Shader optimizations
  • Expanded model support (Whisper, UNet)
  • Unannounced features 🤫
  • INT8, INT4 support

Stay tuned for exciting updates!

💪 Contributing

We welcome contributions to Laserbeak!
