fluester – ˈflʏstɐ

Node.js bindings for OpenAI's Whisper. Hard-fork of whisper-node.

Features

  • Output transcripts to JSON (also .txt, .srt and .vtt)
  • Optimized for CPU (including Apple Silicon ARM)
  • Timestamp precision down to a single word

Installation

Requirements

  • make and everything else listed as required to compile whisper.cpp
  • Node.js >= 20
  1. Add the dependency to your project:
npm install @pr0gramm/fluester
  2. Download a whisper model of your choice:
npx --package @pr0gramm/fluester download-model
  3. Compile whisper.cpp if you don't want to provide your own version:
npx --package @pr0gramm/fluester compile-whisper

Usage

Important: The API only supports WAV files (just like the original whisper.cpp). You need to convert other formats to WAV beforehand. You can do this using ffmpeg (example taken from the whisper project):

ffmpeg -i input.mp3 -ar 16000 -ac 1 -c:a pcm_s16le output.wav

Or use the provided helper to convert the audio file:

import { convertFileToProcessableFile } from "@pr0gramm/fluester";

const inputFile = "input.mp3";
const outputFile = "output.wav";
await convertFileToProcessableFile(inputFile, outputFile);

Translation

import { createWhisperClient } from "@pr0gramm/fluester";

const client = createWhisperClient({
  modelName: "base",
});

const transcript = await client.translate("example/sample.wav");

console.log(transcript); // output: [ {start,end,speech} ]

Output (JSON)

[
  {
    "start": "00:00:14.310", // timestamp start
    "end": "00:00:16.480", // timestamp end
    "speech": "howdy" // transcription
  }
]
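
Because the segments are plain objects, post-processing is simple. The following is a minimal sketch (the Segment type and the toSrt helper are not part of the library; the field shapes are assumed from the output above) that renders the transcript from the translation example into an SRT file:

import { writeFile } from "node:fs/promises";

// Hypothetical helper (not part of @pr0gramm/fluester):
// renders the segment array shown above as SRT subtitle text.
interface Segment {
  start: string; // e.g. "00:00:14.310"
  end: string;
  speech: string;
}

function toSrt(segments: Segment[]): string {
  return segments
    .map((segment, index) => {
      // SRT uses a comma as the millisecond separator.
      const start = segment.start.replace(".", ",");
      const end = segment.end.replace(".", ",");
      return `${index + 1}\n${start} --> ${end}\n${segment.speech}\n`;
    })
    .join("\n");
}

await writeFile("output.srt", toSrt(transcript));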

Language Detection

import { createWhisperClient } from "@pr0gramm/fluester";

const client = createWhisperClient({
  modelName: "base",
});

const result = await client.detectLanguage("example/sample.wav");
if (result) {
  console.log(`Detected: ${result.language} with probability ${result.probability}`);
} else {
  console.log("Did not detect anything :(");
}
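
The pieces above compose into a single pipeline. Here is a minimal sketch (file names are placeholders; only functions shown in this README are used) that converts an MP3, checks whether a language was detected and then translates:

import {
  convertFileToProcessableFile,
  createWhisperClient,
} from "@pr0gramm/fluester";

const client = createWhisperClient({
  modelName: "base",
});

// Convert the input to the WAV format whisper.cpp expects.
await convertFileToProcessableFile("input.mp3", "input.wav");

// Only translate if a language could be detected at all.
const detection = await client.detectLanguage("input.wav");
if (detection) {
  console.log(`Detected ${detection.language} (p=${detection.probability})`);
  console.log(await client.translate("input.wav"));
} else {
  console.log("Could not detect a language.");
}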

Tricks

This library is designed to work well in dockerized environments.

We took care to make the individual steps independent of each other, so they can be used in a multi-stage Docker build.

FROM node:latest as dependencies
    WORKDIR /app
    COPY package.json package-lock.json ./
    RUN npm ci

    RUN npx --package @pr0gramm/fluester compile-whisper
    RUN npx --package @pr0gramm/fluester download-model tiny

FROM node:latest
    WORKDIR /app
    COPY --from=dependencies /app/node_modules /app/node_modules
    COPY ./ ./

This includes the model in the image. If you want to keep your image small, you can also download the model in your entrypoint using the commands above.
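
If you go that route, the download can happen right before your application starts. A minimal entrypoint sketch (the use of execFile and the "tiny" model are assumptions for illustration; the command itself is the one documented above):

// entrypoint.ts – fetch the model at container start instead of baking it into the image.
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);

// Same command as in the installation section.
await run("npx", ["--package", "@pr0gramm/fluester", "download-model", "tiny"]);

// ...then start the actual application.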

Roadmap

  • Nothing ¯\_(ツ)_/¯