@lenml/tokenizer-gpt4
gpt4 tokenizer for NodeJS/Browser
gpt4o tokenizer for NodeJS/Browser
internlm2 tokenizer for NodeJS/Browser
llama2 tokenizer for NodeJS/Browser
llama3 tokenizer for NodeJS/Browser
llama3_1 tokenizer for NodeJS/Browser
environment wrapper, supports all JS environments including node, deno, bun, edge runtime, and cloudflare worker
React Native binding of llama.cpp
A lightweight and intuitive Node.js client for the Paxsenix AI API.
Run AI models locally on your machine with Node.js bindings for llama.cpp. Enforce a JSON schema on the model output at the generation level
CLI for managing AI agents and their components
Ollama provider for Agenite
Nyaa~! A TypeScript client for OpenRouter that's both kawaii and powerful! 😻
#### Description
I needed the SentenceSplitter from llamaindex but had to import the entire llamaindex package, which is 1 GB. I pulled it out and had GPT make a standalone version. It's not exactly the same, but close.
CMMV module for LLM integration, tokenization, RAG dataset creation, and fast FAISS-based vector search for code indexing.
A unified TypeScript/JavaScript SDK for interacting with multiple AI model providers, including OpenAI, Anthropic, Cohere, Gemini, Mistral, DeepSeek, Llama, and XAI. This SDK provides a consistent interface for generating text and working with various AI models.
Native Node.js plugin to run LLaMA inference directly on your machine with no other dependencies.