web-llm-next
Hardware accelerated language model chats on browsers
Inspired by LangMem, nodemem-js is a fast, in-memory vector database for Node.js, designed for efficient similarity search of vector embeddings. Perfect for building chat agent memory and semantic retrieval systems.
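The similarity search such an in-memory vector store performs can be sketched as a brute-force cosine-similarity top-k scan. This is a minimal illustration of the technique, not nodemem-js's actual API; all names below are assumptions.

```typescript
// Illustrative sketch of in-memory vector similarity search.
// `Entry`, `cosineSimilarity`, and `topK` are hypothetical names,
// not part of nodemem-js.
type Entry = { id: string; vector: number[] };

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return the k entries most similar to the query vector.
function topK(entries: Entry[], query: number[], k: number): Entry[] {
  return [...entries]
    .sort((x, y) =>
      cosineSimilarity(y.vector, query) - cosineSimilarity(x.vector, query))
    .slice(0, k);
}

const store: Entry[] = [
  { id: "greeting", vector: [1, 0, 0] },
  { id: "farewell", vector: [0, 1, 0] },
  { id: "hello",    vector: [0.9, 0.1, 0] },
];
const results = topK(store, [1, 0, 0], 2);
// Entries closest to the query come first.
```

A production store would typically add an approximate-nearest-neighbor index instead of scanning every entry, but the ranking criterion is the same.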
React Native binding of llama.cpp
PromptDesk JavaScript SDK
Fork of web-llm whose index.ts has a default export
A zero-dependency TypeScript library that automatically maps OpenAPI operations to Gemini function declarations.
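The core of such a mapping can be sketched as a transform from an OpenAPI operation (operationId, summary, parameters) into a Gemini-style function declaration. This is a hedged illustration under assumed type shapes; the field names and types below are not this library's actual API.

```typescript
// Hypothetical shapes for illustration only; not the library's real types.
interface OpenApiParam {
  name: string;
  required?: boolean;
  schema: { type: string; description?: string };
}
interface OpenApiOperation {
  operationId: string;
  summary?: string;
  parameters?: OpenApiParam[];
}
interface FunctionDeclaration {
  name: string;
  description: string;
  parameters: {
    type: "object";
    properties: Record<string, object>;
    required: string[];
  };
}

// Map one OpenAPI operation to a function declaration:
// operationId -> name, summary -> description, parameters -> JSON-schema
// properties with a required-name list.
function toFunctionDeclaration(op: OpenApiOperation): FunctionDeclaration {
  const properties: Record<string, object> = {};
  const required: string[] = [];
  for (const p of op.parameters ?? []) {
    properties[p.name] = {
      type: p.schema.type,
      description: p.schema.description ?? "",
    };
    if (p.required) required.push(p.name);
  }
  return {
    name: op.operationId,
    description: op.summary ?? "",
    parameters: { type: "object", properties, required },
  };
}

const decl = toFunctionDeclaration({
  operationId: "getWeather",
  summary: "Get current weather for a city",
  parameters: [{ name: "city", required: true, schema: { type: "string" } }],
});
```

The real library would additionally resolve `$ref` schemas and request bodies, but the declaration shape (name, description, object-typed parameter schema) is what the model-side function-calling API consumes.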
Generic API for multiple generative AI models
JavaScript and TypeScript client for Gradient AI
DeepInfra API consumer; no API key needed.
TVM WASM/WebGPU runtime for JS/TS