@fugood/node-llama-darwin-arm64
Native module for another Node binding of llama.cpp (darwin-arm64)
Native module for another Node binding of llama.cpp (darwin-x64)
Native module for another Node binding of llama.cpp (linux-arm64)
Native module for another Node binding of llama.cpp (linux-arm64-cuda)
Native module for another Node binding of llama.cpp (linux-arm64-vulkan)
OpenAPI definitions and converters for 'typia' and 'nestia'.
Prompt utilities for llama-guard. Use MLCommons taxonomies or build your own safety categories.
Superfast runtime validators with only one line
VSCode extension that acts as a Model Context Protocol (MCP) client, enabling integration between MCP servers and GitHub Copilot Chat
A friendly and digestible way to use the OpenAI Assistants API
Environment wrapper that supports all JS environments, including Node, Deno, Bun, edge runtimes, and Cloudflare Workers
An attempt at a pure C++ Turbo Module library
Our library `@lenml/llama2-tokenizer` has been deprecated. We are excited to introduce its replacement, `@lenml/tokenizers`, which offers a broader feature set and an enhanced experience.