ai-slot-web-sdk v0.1.4-alpha.12

License: Apache-2.0 • Last release: 5 months ago

Ai-Slot-Web-SDK: Engage in real-time interactions with your favorite characters directly in your web browser.

Get started

Here are some examples demonstrating the SDK's TypeScript bindings.

import { useRef } from 'react';
import { AiSlotClient } from 'ai-slot-web-sdk';
import { GetResponseResponse } from 'ai-slot-web-sdk/dist/_proto/service/service_pb';

// Initiate the ai-slot client. useRef keeps one instance across React renders.
const aiSlotClient = useRef<AiSlotClient | null>(null);
aiSlotClient.current = new AiSlotClient({
  host: 'https://webstream.ai-slot.ai', // The streaming host.
  apiKey: '<apiKey>',                   // Your API key.
  characterId: '<characterId>',         // Your character ID.
  enableAudio: true,                    // When false, character audio is generated but not played.
  sessionId: '<sessionId>',             // The ongoing conversation session; can be used to access chat history.
  languageCode: 'en-US',                // Optional.
  textOnlyResponse: false,              // Optional; for chat-only applications (no audio response from the character).
  micUsage: true,                       // Optional; set to false to disable microphone usage and access.
  enableFacialData: false,              // Optional; generates viseme data for lip sync and facial expressions.
  faceModel: 3,                         // OVR lipsync.
});

// Register a callback for incoming responses. Keep in mind that it may be
// triggered multiple times if the response arrives in segments.
aiSlotClient.current.setResponseCallback((response: GetResponseResponse) => {
    // Live transcript, only available in audio mode.
    if (response.hasUserQuery()) {
        const transcript = response.getUserQuery();
        const isFinal = response.getIsFinal();
    }
    if (response.hasAudioResponse()) {
        const audioResponse = response.getAudioResponse();
        if (audioResponse.hasTextData()) {
            // Response text.
            console.log(audioResponse.getTextData());
        }
        if (audioResponse.hasAudioData()) {
            // Play or process the audio response.
            const audioByteArray: Uint8Array = audioResponse.getAudioData_asU8();
        }
    }
});
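Because the callback above may deliver a single reply in several text segments, your UI usually wants to stitch the pieces back together before rendering. A minimal sketch; `TextAccumulator` is a hypothetical helper, not part of the SDK:

```typescript
// Hypothetical helper (not part of ai-slot-web-sdk): accumulates text
// segments into one string and resets once a segment is marked final.
class TextAccumulator {
  private parts: string[] = [];

  // Appends a chunk and returns the text assembled so far.
  push(chunk: string, isFinal: boolean = false): string {
    this.parts.push(chunk);
    const text = this.parts.join('');
    if (isFinal) this.parts = []; // Start fresh for the next reply.
    return text;
  }
}
```

Inside the response callback you would call `acc.push(audioResponse.getTextData(), response.getIsFinal())` and render the returned string.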

// Send text input.
const text = 'How are you?';
aiSlotClient.current.sendTextChunk(text);

// Send audio chunks.
// Starts audio recording using the default microphone.
aiSlotClient.current.startAudioChunk();

// Stop recording and finish submitting input.
aiSlotClient.current.endAudioChunk();

// End or reset a conversation session.
aiSlotClient.current.resetSession();
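To play the raw bytes from `getAudioData_asU8()` through the Web Audio API you need `Float32Array` samples in [-1, 1]. A sketch under an assumption: the snippet below treats the payload as 16-bit little-endian PCM, which the SDK does not document here, so verify the actual audio format before relying on it:

```typescript
// Assumption: the audio payload is 16-bit little-endian PCM. Converts it
// to Float32 samples in [-1, 1] for use with AudioBuffer.copyToChannel().
function pcm16ToFloat32(bytes: Uint8Array): Float32Array {
  const view = new DataView(bytes.buffer, bytes.byteOffset, bytes.byteLength);
  const out = new Float32Array(Math.floor(bytes.byteLength / 2));
  for (let i = 0; i < out.length; i++) {
    // Read a signed 16-bit little-endian sample and normalize it.
    out[i] = view.getInt16(i * 2, true) / 32768;
  }
  return out;
}
```

With an `AudioContext`, you could then create an `AudioBuffer` at the stream's sample rate, `copyToChannel()` the result, and play it via an `AudioBufferSourceNode`.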

Facial Expressions

To enable facial expression functionality, initialize the AiSlotClient with the required parameters and set the enableFacialData flag to true so that facial expression data is generated.

aiSlotClient.current = new AiSlotClient({
  host: 'https://webstream.ai-slot.ai',  
  apiKey: '<apiKey>',
  characterId: '<characterId>',
  enableAudio: true,
  enableFacialData: true,
  faceModel: 3, // OVR lipsync
});
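The SDK's viseme payload shape is not documented here, so the following is only a hypothetical sketch: assuming frames arrive as maps of named blendshape weights in [0, 1], a renderer can linearly interpolate between the last two frames on each animation tick to smooth lip-sync motion. Both `BlendshapeFrame` and `lerpFrames` are illustrative names, not SDK API:

```typescript
// Hypothetical frame shape: blendshape name -> weight in [0, 1].
type BlendshapeFrame = Record<string, number>;

// Linearly interpolates from frame `a` to frame `b` at parameter t in [0, 1].
// Keys missing from `a` are treated as weight 0.
function lerpFrames(a: BlendshapeFrame, b: BlendshapeFrame, t: number): BlendshapeFrame {
  const out: BlendshapeFrame = {};
  for (const key of Object.keys(b)) {
    const from = a[key] ?? 0;
    out[key] = from + (b[key] - from) * t;
  }
  return out;
}
```

In a render loop, `t` would be the elapsed time since the last frame divided by the frame interval, clamped to [0, 1].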