@nuecms/weapp-sdk
A flexible and lightweight WeChat Mini Program SDK for Node.js
A flexible and lightweight WeChat Official Accounts SDK for Node.js
State-of-the-art Machine Learning for the web. Run 🤗 Transformers directly in your browser, with no need for a server!
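For orientation, a minimal sketch of the Transformers.js `pipeline` API that this description refers to; the task and input text are arbitrary examples, and the snippet assumes the `@xenova/transformers` package is installed:

```js
// Load a task pipeline and run it entirely client-side (browser or Node.js),
// with no inference server required. The first call downloads and caches the
// default model for the chosen task.
import { pipeline } from '@xenova/transformers';

const classifier = await pipeline('sentiment-analysis');

// Returns something like [{ label: 'POSITIVE', score: 0.99 }]
const result = await classifier('Running models in the browser works surprisingly well.');
console.log(result);
```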
Node.js bindings for the huggingface/tokenizers library
Semantically create chunks from large texts. Useful for workflows involving large language models (LLMs).
Easy, fast, WASM/WebGPU-accelerated vector embeddings for the web platform, computed locally via ONNX/Transformers.js or via an API. Compatible with browsers, Workers, Web Extensions, Node.js, and more.
A lightweight and efficient vector database for storing and searching text embeddings in the browser's local storage. The package uses Transformers.js to generate embeddings for text documents and provides functionality for similarity search, filtering, and more.
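As a rough sketch of the embed-then-search workflow these in-browser vector stores describe, using the Transformers.js feature-extraction pipeline; the model name and the cosine helper are illustrative, not any specific package's API:

```js
import { pipeline } from '@xenova/transformers';

// Generate normalized sentence embeddings locally with Transformers.js.
// The model name is an example; any feature-extraction model works.
const extractor = await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2');

async function embed(text) {
  const output = await extractor(text, { pooling: 'mean', normalize: true });
  return Array.from(output.data); // plain number[] suitable for local storage
}

// Cosine similarity; with normalized vectors this reduces to a dot product.
function cosine(a, b) {
  return a.reduce((sum, v, i) => sum + v * b[i], 0);
}

const query = await embed('lightweight in-browser vector search');
const doc = await embed('a small client-side vector database');
console.log(cosine(query, doc));
```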
General-purpose TypeScript plugins/transformers
Core logic and types for working with transformers
Logic for evaluating and executing transformers
EntityDB is an in-browser vector database wrapping IndexedDB and Transformers.js
Chroma's fork of @xenova/transformers serving as our default embedding function