offload-ai
Offload is an in-browser AI stack. It provides an SDK to run AI inference directly on your users' devices, increasing their data privacy and saving on inference costs.
OmiAI - A highly opinionated AI SDK with auto-model selection, built-in reasoning and curated tools
Unofficial CLI for working with Notion AI
Make your app understand language. Summarize conversations, categorize articles, and more.
An MCP server connecting to a managed index on LlamaCloud
A toolkit for building onchain AI agents
Unofficial provider for working with Notion AI
JS fetch wrapper for consuming the Ollama API in Node and the browser
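As a rough illustration of what such a wrapper does, here is a minimal sketch built on Ollama's documented REST endpoint (`POST /api/generate` on `localhost:11434`); the helper name `buildGenerateRequest` is hypothetical, not part of any listed package:

```javascript
// Hypothetical helper: shapes a fetch-ready request for Ollama's
// documented /api/generate endpoint. Works in Node and the browser.
function buildGenerateRequest(model, prompt, options = {}) {
  return {
    url: "http://localhost:11434/api/generate",
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      // stream: false asks Ollama for a single JSON response
      // instead of a stream of chunks.
      body: JSON.stringify({ model, prompt, stream: false, ...options }),
    },
  };
}

// Usage (assumes a local Ollama instance is running):
// const { url, init } = buildGenerateRequest("llama3", "Why is the sky blue?");
// const res = await fetch(url, init);
// const { response } = await res.json();
```

Keeping the request-building step pure makes the wrapper easy to test without a running Ollama server.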
A web UI for Ollama
HTTP proxy server for accessing Vertex AI through the REST API interface of Ollama
Vercel AI Provider for running LLMs locally using Ollama
OllamaApiFacadeJS is an open-source library for running an ExpressJS backend as an Ollama-compatible API using LangChainJS. It supports local language model services such as LM Studio and allows seamless message conversion and streaming between LangChainJS and Ollama clients.
A CLI tool to benchmark the performance of Ollama models
Allow websites to access your locally running Ollama instance.
Commit message generator with Ollama
Okesa: LLM-powered Natural Language Processing 💬
A Model Context Protocol server implementation for Nx
Unofficial Character AI wrapper for Node.
VSCode extension for AI-powered commit message generation with customizable providers