web-speech-cognitive-services
Web Speech API adapter to use Cognitive Services Speech Services for both speech-to-text and text-to-speech services.
This scaffold is provided by react-component-template.
Description
Speech technologies enable many interesting scenarios, including intelligent personal assistants, and provide alternative inputs for assistive technologies.
Although the W3C has standardized speech technologies in the browser, native speech-to-text and text-to-speech support is still scarce. Cloud-based speech technologies, however, are very mature.
This polyfill provides the W3C Speech Recognition and Speech Synthesis APIs in the browser by using Azure Cognitive Services Speech Services. It brings speech technologies to all modern first-party browsers on both PC and mobile platforms.
Demo
Before getting started, please obtain a Cognitive Services subscription key from your Azure subscription.
Try out our demo at https://compulim.github.io/web-speech-cognitive-services. If you don't have a subscription key, you can still try the demo in a browser with built-in speech support.
We use react-dictate-button and react-say to quickly set up the playground.
Browser requirements
Speech recognition requires the WebRTC API, and the page must be hosted over HTTPS or on localhost. Although iOS 12 supports WebRTC, native apps using WKWebView do not.
Special requirement for Safari
Speech synthesis requires the Web Audio API. In Safari, a user gesture (click or tap) is required before audio clips can be played through the Web Audio API. To ready the Web Audio API for use without a user gesture, you can synthesize an empty string; this does not trigger any network call but plays a hardcoded, empty, short audio clip. If you already have a "primed" AudioContext object, you can also pass it in as an option.
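For example, you could prime speech synthesis on the first user interaction (a minimal sketch, assuming speechSynthesis and SpeechSynthesisUtterance come from the ponyfill created as shown below):
document.addEventListener(
  'click',
  () => {
    // Synthesizing an empty string plays a hardcoded, empty audio clip,
    // which unlocks the Web Audio API for later, gesture-free playback
    speechSynthesis.speak(new SpeechSynthesisUtterance(''));
  },
  { once: true }
);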
How to use
There are two ways to use this package:
Using <script> to load the bundle
To use the ponyfill directly in HTML, you can use our published bundle from unpkg.
In the sample below, we use the bundle to perform text-to-speech with a voice named "JessaRUS".
<!DOCTYPE html>
<html lang="en-US">
  <head>
    <script src="https://unpkg.com/web-speech-cognitive-services/umd/web-speech-cognitive-services.production.min.js"></script>
  </head>
  <body>
    <script>
      const { speechSynthesis, SpeechSynthesisUtterance } = window.WebSpeechCognitiveServices.create({
        region: 'westus',
        subscriptionKey: 'YOUR_SUBSCRIPTION_KEY'
      });
      speechSynthesis.addEventListener('voiceschanged', () => {
        const voices = speechSynthesis.getVoices();
        const utterance = new SpeechSynthesisUtterance('Hello, World!');
        utterance.voice = voices.find(voice => /JessaRUS/u.test(voice.name));
        speechSynthesis.speak(utterance);
      });
    </script>
  </body>
</html>
We do not host the bundle. You should always use Subresource Integrity to protect bundle integrity when loading from a third-party CDN.
The voiceschanged event fires shortly after you create the ponyfill. You will need to wait for the event before you can choose a voice for your utterance.
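If you prefer promises, you can wrap the event (a minimal sketch using the same ponyfill):
const voices = await new Promise(resolve =>
  speechSynthesis.addEventListener(
    'voiceschanged',
    () => resolve(speechSynthesis.getVoices()),
    { once: true }
  )
);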
Install from NPM
For production build, run npm install web-speech-cognitive-services.
For development build, run npm install web-speech-cognitive-services@master.
Since the Speech Services SDK is not on NPM yet, we bundle the SDK inside this package for now. When the Speech Services SDK is released on NPM, we will define it as a peer dependency.
Polyfilling vs. ponyfilling
In JavaScript, a polyfill is a technique for bringing newer features to older environments. A ponyfill is very similar, but instead of polluting the environment by default, it lets the developer choose what they want. This article talks about polyfill vs. ponyfill.
In this package, we prefer a ponyfill because it does not pollute the hosting environment. You are also free to mix-and-match multiple speech recognition engines under a single environment.
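For example, the ponyfill can sit alongside the browser's native engine without touching any globals (a sketch; createPonyfill is covered in the snippets below):
import createPonyfill from 'web-speech-cognitive-services/lib/SpeechServices';

// The native engine, if the browser provides one; globals stay untouched
const NativeSpeechRecognition = window.SpeechRecognition || window.webkitSpeechRecognition;

// The Cognitive Services engine from the ponyfill
const { SpeechRecognition: CognitiveServicesSpeechRecognition } = await createPonyfill({
  region: 'westus',
  subscriptionKey: 'YOUR_SUBSCRIPTION_KEY'
});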
Options
The following lists all options supported by the adapter.
Code snippets
For readability, we omit the wrapping async function in all code snippets. To run the code, you will need to wrap it in an async function.
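For example:
(async function () {
  // Paste any snippet from this section here
})();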
Speech recognition (speech-to-text)
import { createSpeechRecognitionPonyfill } from 'web-speech-cognitive-services/lib/SpeechServices/SpeechToText';
const {
  SpeechRecognition
} = await createSpeechRecognitionPonyfill({
  region: 'westus',
  subscriptionKey: 'YOUR_SUBSCRIPTION_KEY'
});
const recognition = new SpeechRecognition();
recognition.interimResults = true;
recognition.lang = 'en-US';
recognition.onresult = ({ results }) => {
  console.log(results);
};
recognition.start();
Note: most browsers require HTTPS or localhost for WebRTC.
Integrating with React
You can use react-dictate-button to integrate speech recognition functionality into your React app.
import createPonyfill from 'web-speech-cognitive-services/lib/SpeechServices';
import DictateButton from 'react-dictate-button';
const {
  SpeechGrammarList,
  SpeechRecognition
} = await createPonyfill({
  region: 'westus',
  subscriptionKey: 'YOUR_SUBSCRIPTION_KEY'
});
export default props =>
  <DictateButton
    onDictate={ ({ result }) => alert(result.transcript) }
    speechGrammarList={ SpeechGrammarList }
    speechRecognition={ SpeechRecognition }
  >
    Start dictation
  </DictateButton>
Speech synthesis (text-to-speech)
import { createSpeechSynthesisPonyfill } from 'web-speech-cognitive-services/lib/SpeechServices/TextToSpeech';
const {
  speechSynthesis,
  SpeechSynthesisUtterance
} = await createSpeechSynthesisPonyfill({
  region: 'westus',
  subscriptionKey: 'YOUR_SUBSCRIPTION_KEY'
});
speechSynthesis.addEventListener('voiceschanged', () => {
  const voices = speechSynthesis.getVoices();
  const utterance = new SpeechSynthesisUtterance('Hello, World!');
  utterance.voice = voices.find(voice => /JessaRUS/u.test(voice.name));
  speechSynthesis.speak(utterance);
});
Note: speechSynthesis is camel-cased because it is an instance, not a class. A list of supported regions can be found in this article.
pitch, rate, voice, and volume are supported. Only the onstart, onerror, and onend events are supported.
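For example (a sketch, reusing speechSynthesis and SpeechSynthesisUtterance from the snippet above):
const utterance = new SpeechSynthesisUtterance('Hello, World!');

// Prosody properties are supported
utterance.pitch = 1;
utterance.rate = 1;
utterance.volume = 1;

// Only these three events are supported
utterance.onstart = () => console.log('Synthesis started');
utterance.onerror = event => console.error(event);
utterance.onend = () => console.log('Synthesis ended');

speechSynthesis.speak(utterance);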
Integrating with React
You can use react-say to integrate speech synthesis functionality into your React app.
import createPonyfill from 'web-speech-cognitive-services/lib/SpeechServices';
import React from 'react';
import Say from 'react-say';
export default class extends React.Component {
  constructor(props) {
    super(props);
    this.state = {};
  }
  async componentDidMount() {
    const ponyfill = await createPonyfill({
      region: 'westus',
      subscriptionKey: 'YOUR_SUBSCRIPTION_KEY'
    });
    this.setState(() => ({ ponyfill }));
  }
  render() {
    const {
      state: { ponyfill }
    } = this;
    return (
      ponyfill &&
        <Say
          speechSynthesis={ ponyfill.speechSynthesis }
          speechSynthesisUtterance={ ponyfill.SpeechSynthesisUtterance }
          text="Hello, World!"
        />
    );
  }
}
Using authorization token
Instead of exposing your subscription key in the browser, we strongly recommend using an authorization token.
import createPonyfill from 'web-speech-cognitive-services/lib/SpeechServices';
const ponyfill = await createPonyfill({
  authorizationToken: 'YOUR_AUTHORIZATION_TOKEN',
  region: 'westus',
});
You can also provide an async function that fetches the authorization token on demand. You should cache the authorization token for subsequent requests. For simplicity, this snippet does not cache the result; a caching sketch follows the note below.
import createPonyfill from 'web-speech-cognitive-services/lib/SpeechServices';
const ponyfill = await createPonyfill({
  authorizationToken: () => fetch('https://example.com/your-token').then(res => res.text()),
  region: 'westus',
});
Note: if you do not specify region, we default to "westus". A list of supported regions can be found in this article.
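As promised above, a caching fetcher might look like this (a sketch; Cognitive Services authorization tokens expire after about 10 minutes, so we refresh well before expiry):
import createPonyfill from 'web-speech-cognitive-services/lib/SpeechServices';

let cachedToken;
let fetchedAt = 0;

// Re-fetch when the cached token is older than 5 minutes
async function getAuthorizationToken() {
  if (!cachedToken || Date.now() - fetchedAt > 300000) {
    cachedToken = await fetch('https://example.com/your-token').then(res => res.text());
    fetchedAt = Date.now();
  }
  return cachedToken;
}

const ponyfill = await createPonyfill({
  authorizationToken: getAuthorizationToken,
  region: 'westus'
});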
If you prefer to use the deprecated Bing Speech, import from 'web-speech-cognitive-services/lib/BingSpeech' instead.
Lexical and ITN support
Lexical and ITN support is unique to Cognitive Services Speech Services. In addition to transcript and confidence, our adapter adds the properties transcriptITN, transcriptLexical, and transcriptMaskedITN to surface the result.
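For example, each result alternative carries the extra fields (a sketch, assuming recognition is a SpeechRecognition instance created from the ponyfill):
recognition.onresult = ({ results }) => {
  const firstAlternative = results[0][0];

  console.log(firstAlternative.transcript);          // Display form
  console.log(firstAlternative.confidence);
  console.log(firstAlternative.transcriptLexical);   // Lexical form
  console.log(firstAlternative.transcriptITN);       // Inverse text normalization applied
  console.log(firstAlternative.transcriptMaskedITN); // ITN with profanity masked
};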
Biasing towards some words for recognition
In some cases, you may want the speech recognition engine to be biased towards "Bellevue", because it is not trivial for the engine to distinguish between "Bellevue", "Bellview", and "Bellvue" (without the "e"). By providing a list of words, the speech recognition engine will be biased towards your choice of words.
Since Cognitive Services does not work with weighted grammars, we built another SpeechGrammarList to better fit the scenario.
import createPonyfill from 'web-speech-cognitive-services/lib/SpeechServices';
const {
  SpeechGrammarList,
  SpeechRecognition
} = await createPonyfill({
  region: 'westus',
  subscriptionKey: 'YOUR_SUBSCRIPTION_KEY'
});
const recognition = new SpeechRecognition();
recognition.grammars = new SpeechGrammarList();
recognition.grammars.phrases = ['Tuen Mun', 'Yuen Long'];
recognition.onresult = ({ results }) => {
  console.log(results);
};
recognition.start();
Custom Speech support
Please refer to "What is Custom Speech?" for tutorial on creating your first Custom Speech model.
To use custom speech for speech recognition, you need to pass the endpoint ID while creating the ponyfill.
import createPonyfill from 'web-speech-cognitive-services/lib/SpeechServices';
const ponyfill = await createPonyfill({
  region: 'westus',
  speechRecognitionEndpointId: '12345678-1234-5678-abcd-12345678abcd',
  subscriptionKey: 'YOUR_SUBSCRIPTION_KEY'
});
Custom Voice support
Please refer to "Get started with Custom Voice" for a tutorial on creating your first Custom Voice model.
To use Custom Voice for speech synthesis, you need to pass the deployment ID when creating the ponyfill, and pass the voice model name as the voice URI.
import createPonyfill from 'web-speech-cognitive-services/lib/SpeechServices';
const ponyfill = await createPonyfill({
  region: 'westus',
  speechSynthesisDeploymentId: '12345678-1234-5678-abcd-12345678abcd',
  subscriptionKey: 'YOUR_SUBSCRIPTION_KEY'
});
const { speechSynthesis, SpeechSynthesisUtterance } = ponyfill;
const utterance = new SpeechSynthesisUtterance('Hello, World!');
utterance.voice = { voiceURI: 'your-model-name' };
await speechSynthesis.speak(utterance);
Test matrix
For a detailed test matrix, please refer to SPEC-RECOGNITION.md or SPEC-SYNTHESIS.md.
Known issues
- Speech recognition
  - Interim results do not return confidence; final results do
    - We always return 0.5 for interim results
  - Cognitive Services supports grammar lists, but not in JSGF format; more work to be done in this area
    - Although Google Chrome supports grammar lists, it seems the grammar list is not used at all
- Speech synthesis
  - onboundary, onmark, onpause, and onresume are not supported/fired
  - pause will pause immediately and does not pause on word breaks, due to the lack of boundary events
Roadmap
- Speech recognition
  - Add tests for lifecycle events
  - Support the stop() and abort() functions
  - Add dynamic phrases
  - Add reference grammars
  - Add continuous mode
  - Investigate support of Opus (OGG) encoding
    - Currently, there is a problem with microsoft-speech-browser-sdk@0.0.12, tracked in this issue
  - Support Custom Speech
  - Support ITN, masked ITN, and lexical output
- Speech synthesis
  - Event: add pause/resume support
  - Properties: add paused/pending/speaking support
  - Support custom voice fonts
Contributions
Like us? Star us.
Want to make it better? File us an issue.
Don't like something you see? Submit a pull request.