second-brain-ts
Backend for your Personal Assistant, powered by Private AI (aka PAPA)
DataStax RAGStack TS
🚀 Accelerate LLM application development
Connect to the ITTIA APIs or self-hosted ones
A Node.js RAG framework for working easily with LLMs and custom datasets
MCP server for Langflow RAG integration with Cursor
The quickest way to a production-grade RAG UI.
The memory layer for personalized AI, in TypeScript
An MCP server for the RAG Web Browser Actor
A Model Context Protocol server for fetching and storing documentation in a vector database, enabling semantic search and retrieval to augment LLM capabilities with relevant documentation context.
MCP server integrating the Tavily search API with LLMs, providing web search, RAG context generation, and Q&A capabilities
WebAssembly binding for llama.cpp, enabling in-browser LLM inference
The TypeScript Code Extractor and Analyzer can be handy for RAG (Retrieval-Augmented Generation) systems over codebases. It provides a detailed, structured representation of the codebase that can be converted into embeddings.
Client for the Vectorize API
A customizable React chat widget for RAG applications
A framework for connecting your data to large language models
CLI tool for syncing local files to Needle
An MCP server for semantic documentation search and retrieval using vector databases to augment LLM capabilities.
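Most of the entries above share one core step: ranking stored documents by the similarity of their embeddings to a query embedding. A minimal, library-agnostic sketch of that step in TypeScript (the `Doc` type and function names here are illustrative, not any listed project's API) might look like:

```typescript
// Rank stored documents by cosine similarity to a query embedding.
// Embeddings are assumed to be precomputed dense vectors of equal length.

interface Doc {
  id: string;
  vec: number[]; // precomputed embedding
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  // Guard against zero-length vectors: treat their similarity as 0.
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

function rankBySimilarity(query: number[], docs: Doc[]): Doc[] {
  // Sort a copy, most similar first; the original array is untouched.
  return [...docs].sort((x, y) => cosine(query, y.vec) - cosine(query, x.vec));
}
```

In a real RAG pipeline this brute-force scan is replaced by an approximate nearest-neighbor index in a vector database, but the ranking criterion is the same.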