
language-model-plugin

Enables on-device LLM inference using the MediaPipe LLM Inference API.

Install

npm install language-model-plugin
npx cap sync

API

generate(...)

generate(options: { value: string; }) => Promise<{ value: string; }>
Param      Type
options    { value: string; }

Returns: Promise<{ value: string; }>
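
A minimal call might look like the following (the prompt string is a placeholder; the value field carries the prompt, and the resolved value holds the full generated response):

import { Inference } from 'language-model-plugin';

// Send a prompt and wait for the complete response.
const result = await Inference.generate({ value: 'Write a haiku about the sea.' });
console.log(result.value); // the full model response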


generateStreaming(...)

generateStreaming(options: { value: string; }) => Promise<{ value: string; }>
Param      Type
options    { value: string; }

Returns: Promise<{ value: string; }>
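
Judging from the Usage section below, the streamed output is not delivered through the returned promise but through the llm_start and llm_partial listener events. A minimal sketch:

import { Inference } from 'language-model-plugin';

// Fires once when a new generation starts.
Inference.addListener('llm_start', () => console.log('generation started'));
// Fires for each partial chunk of generated text.
Inference.addListener('llm_partial', (result) => console.log(result.value));

await Inference.generateStreaming({ value: 'Tell me a story.' });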


In the Capacitor app (not the plugin), add a supported model to the app's Copy Bundle Resources build phase: in Xcode, select the App target, open Build Phases, expand Copy Bundle Resources, and add the model file. You can download the model from Kaggle: https://www.kaggle.com/models/google/gemma/tfLite/gemma-2b-it-gpu-int4

Usage

Here’s a simple example of how to use the plugin in your application:

import { Inference } from 'language-model-plugin';

// promptRef is a React ref to the prompt <input>; setLastMessage and
// setMessageStream are React state setters.
async function doInference(e) {
    e.preventDefault();
    setMessageStream([]); // clear any previously streamed output
    const result = await Inference.generate({ value: promptRef.current.value });
    setLastMessage(result.value);
}

The plugin also supports streaming responses. generateStreaming starts the generation, and partial results are delivered through the llm_start and llm_partial listener events:

import { useState, useEffect, useRef } from 'react';
import { Inference } from 'language-model-plugin';

function AIModel() {
    const [messageStream, setMessageStream] = useState([]);
    const promptRef = useRef();

    async function doStreamingInference(e) {
        e.preventDefault();
        // Partial results arrive via the listeners registered below.
        await Inference.generateStreaming({ value: promptRef.current.value });
    }

    useEffect(() => {
        // Append each partial chunk to the streamed message.
        const partial = Inference.addListener('llm_partial', (result) => {
            setMessageStream(prevData => [...prevData, result.value]);
        });
        // Clear the previous output when a new generation starts.
        const start = Inference.addListener('llm_start', () => {
            setMessageStream([]);
        });
        // Remove both listeners when the component unmounts.
        return () => {
            partial.then(handle => handle.remove());
            start.then(handle => handle.remove());
        };
    }, []);

    return (
        <>
            <input type="text" ref={promptRef} />
            <button onClick={doStreamingInference}>streaming inference</button>
            <span>{messageStream.join('')}</span>
        </>
    );
}

export default AIModel;