hyllama
llama.cpp GGUF file parser for JavaScript
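For context, every GGUF file begins with a small fixed header (a 4-byte magic, a uint32 version, then two uint64 counts) before the metadata key/value section that a parser like this exposes. A minimal sketch of reading that header in Node.js, following the public GGUF spec rather than hyllama's actual API:

```ts
import { readFileSync } from "node:fs";

// Illustrative GGUF header read per the public GGUF spec;
// this is not hyllama's implementation.
const buf = readFileSync("model.gguf");
const view = new DataView(buf.buffer, buf.byteOffset, buf.byteLength);

const magic = buf.toString("ascii", 0, 4);
if (magic !== "GGUF") throw new Error(`not a GGUF file: ${magic}`);

const version = view.getUint32(4, true);            // little-endian uint32
const tensorCount = view.getBigUint64(8, true);     // uint64
const metadataKvCount = view.getBigUint64(16, true);

console.log({ version, tensorCount, metadataKvCount });
```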
Use `npm i --save llama.native.js` to run llama.cpp models on your local machine. Features a socket.io server and client that can run inference against the host of the model.
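A sketch of what such a socket.io inference client could look like (the URL and event names here are assumptions for illustration, not llama.native.js's documented protocol):

```ts
import { io } from "socket.io-client";

// Hypothetical endpoint and event names; the real protocol may differ.
const socket = io("http://localhost:3000");

socket.on("connect", () => {
  socket.emit("prompt", { text: "Why is the sky blue?" });
});

// Stream tokens as the model host generates them, then disconnect.
socket.on("token", (token: string) => process.stdout.write(token));
socket.on("done", () => socket.disconnect());
```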
React Native binding for llama.cpp
Serve GGML 4/5-bit quantized LLMs based on Meta's LLaMA model over WebSocket with llama.cpp
Run AI models locally on your machine with Node.js bindings for llama.cpp. Force a JSON schema on the model output at the generation level
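A minimal sketch of that schema-constrained generation, assuming a node-llama-cpp v3-style API (method names may vary between versions; check the library's docs):

```ts
import { getLlama, LlamaChatSession } from "node-llama-cpp";

const llama = await getLlama();
const model = await llama.loadModel({ modelPath: "model.gguf" }); // path is illustrative
const context = await model.createContext();
const session = new LlamaChatSession({ contextSequence: context.getSequence() });

// The grammar constrains sampling so the output must parse as JSON
// matching this schema.
const grammar = await llama.createGrammarForJsonSchema({
  type: "object",
  properties: { answer: { type: "string" } },
  required: ["answer"]
});

const res = await session.prompt("Why is the sky blue?", { grammar });
console.log(grammar.parse(res)); // e.g. { answer: "..." }
```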
Control what LLMs can, and can't, say
A library for generating syntactically valid code from an LLM.
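The underlying idea in this family of tools is grammar-constrained decoding: at each step, tokens that would make the output syntactically invalid are masked out before sampling, so the result is valid by construction. A self-contained toy illustration of that loop (not contort's API):

```ts
type Token = string;

// Toy "grammar": the output must be zero or more "ab" pairs ending in ".".
function allowed(prefix: string): Token[] {
  if (prefix.endsWith("a")) return ["b"];  // an 'a' must be followed by a 'b'
  if (prefix.length >= 6) return ["."];    // toy length cap: force termination
  return ["a", "."];
}

// Greedy pick over the allowed set only; a real sampler would
// renormalize the model's probabilities across that set.
function sample(tokens: Token[], logits: Map<Token, number>): Token {
  return tokens.reduce((best, t) =>
    (logits.get(t) ?? -Infinity) > (logits.get(best) ?? -Infinity) ? t : best);
}

const fakeLogits = new Map<Token, number>([["a", 0.9], ["b", 0.8], [".", 0.1]]);
let out = "";
while (!out.endsWith(".")) out += sample(allowed(out), fakeLogits);
console.log(out); // "ababab." — always valid under the toy grammar
```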