Moonshine.js

Moonshine.js makes it easy for web developers to build modern, speech-driven web experiences without sacrificing user privacy. We build on three key principles:

  • Fast Transcription: simply connect a WebAudio-compliant media stream from any browser audio source and generate rapid transcriptions of speech.
  • Easy Voice Control: build feature-rich voice-controlled web apps in < 10 lines of code.
  • Local Processing: all audio processing happens locally in the user's web browser; no cloud services or privacy violations required.

Note: This package is currently in beta, and breaking changes may occur between versions. User feedback and developer contributions are welcome.

Installation

You can load Moonshine.js from a CDN or install it with npm; import the package according to whichever method you prefer.

Via CDN

import * as Moonshine from 'https://cdn.jsdelivr.net/npm/@usefulsensors/moonshine-js@latest/dist/moonshine.min.js'
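
The CDN build is an ES module, so the import above has to run inside a module script. As a minimal sketch of embedding it in a page (the surrounding HTML is illustrative, not part of the library):

<!DOCTYPE html>
<html>
  <body>
    <script type="module">
      // Import the ES module build directly from jsDelivr
      import * as Moonshine from 'https://cdn.jsdelivr.net/npm/@usefulsensors/moonshine-js@latest/dist/moonshine.min.js'

      // The Moonshine namespace is now available within this module script
      console.log(Moonshine)
    </script>
  </body>
</html>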

Via npm

Install the package first:

npm install @usefulsensors/moonshine-js

Then import:

import * as Moonshine from '@usefulsensors/moonshine-js'

Quickstart

Let's get started with a simple example: a transcriber that prints speech from the microphone to the console. The MicrophoneTranscriber class handles this, and you can control its behavior by passing it a set of callbacks when you create it:

import * as Moonshine from 'https://cdn.jsdelivr.net/npm/@usefulsensors/moonshine-js@latest/dist/moonshine.min.js'

var transcriber = new Moonshine.MicrophoneTranscriber(
    "model/tiny", // the fastest and smallest Moonshine model
    {
        onTranscriptionUpdated(text) {
            console.log(text)
        }
    }
)

transcriber.start();

When we start the transcriber, the browser requests microphone permission and begins printing everything the user says to the console. In some cases it is useful to wait until the user has stopped speaking before transcribing their words. To do that, we enable voice activity detection (VAD) when we create the transcriber:

var transcriber = new Moonshine.MicrophoneTranscriber(
    "model/tiny",
    {
        onTranscriptionUpdated(text) {
            console.log(text)
        }
    },
    true // enable voice activity detection
)

transcriber.start();

Now the transcription will only update between pauses in speech.

That's all it takes to get started! Read the guides to learn how to transcribe audio from other sources, or to build voice-controlled applications.
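
As a small taste of voice control, here is a minimal sketch built only on the MicrophoneTranscriber shown above. It listens for two illustrative phrases ("dark mode" and "light mode" are placeholders, not commands defined by the library) and toggles a CSS class on the page:

var transcriber = new Moonshine.MicrophoneTranscriber(
    "model/tiny",
    {
        onTranscriptionUpdated(text) {
            // Normalize the transcription and check it for a couple of example phrases
            var command = text.toLowerCase();
            if (command.includes("dark mode")) {
                document.body.classList.add("dark");
            } else if (command.includes("light mode")) {
                document.body.classList.remove("dark");
            }
        }
    },
    true // wait for a pause in speech before checking for a command
)

transcriber.start();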
