node-red-contrib-ml-rag
Retrieval Augmented Generation (RAG) and local GPT (text-generation LLM, i.e. large language model) toolkit for machine learning (ML) apps with Node-RED.
Add AI functionality to your flows! This module includes a set of nodes that enable easy communication with Ollama, enriching your projects with intelligent solutions.
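These nodes talk to a locally running Ollama server. As a minimal sketch (not this module's actual node code), the kind of HTTP call involved looks like this, assuming Ollama's default port and an example model name:

```ts
// A sketch of the HTTP call an Ollama node would make under the hood.
// Assumes Ollama's default local endpoint; "llama3" is just an example model.
async function generate(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "llama3", prompt, stream: false }),
  });
  const data = await res.json();
  return data.response; // non-streaming responses carry the generated text in `response`
}

generate("Summarize RAG in one sentence.").then(console.log);
```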
Inspired by LangMem, nodemem-js is a fast, in-memory vector database for Node.js, designed for efficient similarity search of vector embeddings. Perfect for building chat agent memory and semantic retrieval systems.
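As a rough illustration of what such an in-memory store does, here is a minimal cosine-similarity search sketch; the class and method names are hypothetical and are not nodemem-js's real API:

```ts
// Illustrative in-memory vector store with cosine-similarity search.
// Names are hypothetical; this is not nodemem-js's actual interface.
type Entry = { id: string; vector: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

class MemoryStore {
  private entries: Entry[] = [];
  add(id: string, vector: number[]): void {
    this.entries.push({ id, vector });
  }
  // Return the k stored entries most similar to the query embedding.
  search(query: number[], k = 3): Entry[] {
    return [...this.entries]
      .sort((x, y) => cosine(y.vector, query) - cosine(x.vector, query))
      .slice(0, k);
  }
}
```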
Phospho, React components to record user feedback
Phospho, the LLM analytics platform
A protocol for large-scale Interplanetary Intertool Agent Context
Specify what you want it to build, and the AI builds it.
🔑 Unimpeded: Convert Poe.com to an OpenAI interface-compatible format!
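Because the converter exposes an OpenAI-compatible surface, a standard OpenAI client can simply be pointed at it. The base URL, API key, and model below are assumptions about a local deployment, not documented values:

```ts
import OpenAI from "openai";

// Assumes the converter runs locally and serves OpenAI-style /v1 routes;
// baseURL, apiKey, and model are placeholders for your own deployment.
const client = new OpenAI({ baseURL: "http://localhost:3000/v1", apiKey: "placeholder" });

const completion = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Hello via an OpenAI-compatible endpoint!" }],
});
console.log(completion.choices[0].message.content);
```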
A modern flow framework for pocket-sized applications and LLM orchestration
PETR lets you run multiple models with the same prompt and then compare the results. Currently designed to be used with [LangChain.js](https://github.com/hwchase17/langchainjs).
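A rough sketch of the same idea using current LangChain.js chat models follows; this is illustrative only, not PETR's actual interface, and the model names are examples:

```ts
import { ChatOpenAI } from "@langchain/openai";

// Run one prompt against several models and print the answers side by side.
// Model names are examples; this is not PETR's actual API.
const prompt = "Explain retrieval-augmented generation in one sentence.";
const models = ["gpt-4o-mini", "gpt-3.5-turbo"].map((model) => new ChatOpenAI({ model }));

const answers = await Promise.all(models.map((m) => m.invoke(prompt)));
answers.forEach((a, i) => console.log(`--- model ${i} ---\n${a.content}`));
```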
A glorified AI-driven switch statement.
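In other words: ask a model to pick one of your labels, then branch on the answer. A minimal, dependency-free sketch, where `complete` stands in for whatever LLM call you already have:

```ts
// Sketch of an "AI-driven switch": have a model pick one label, then branch on it.
// `complete` is a stand-in for your own LLM call; nothing here is this package's real API.
type Complete = (prompt: string) => Promise<string>;

async function aiSwitch(
  input: string,
  branches: Record<string, () => void>,
  complete: Complete,
): Promise<void> {
  const labels = Object.keys(branches).join(", ");
  const answer = await complete(
    `Classify the following input as exactly one of [${labels}]. Reply with the label only.\n\n${input}`,
  );
  const label = answer.trim();
  (branches[label] ?? branches["default"])?.();
}
```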
An implementation of the Poe Bot Protocol in Node.js.
A Node.js package to interact with the Peslac API for document processing.
CLI tool that pipes text into LLM agents from different providers
This library enables users to generate Playwright TypeScript tests from manual tests written in natural language by leveraging a local Large Language Model (LLM). Note, however, that it is still in its experimental phase and should be used with caution.
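For illustration only, this is the kind of Playwright TypeScript output such a tool targets; the URL and selectors are invented for the example:

```ts
import { test, expect } from "@playwright/test";

// Manual step: "Open the home page, search for 'laptops', and check that results appear."
// The URL and selectors are invented purely to illustrate the target output format.
test("search returns results", async ({ page }) => {
  await page.goto("https://example.com");
  await page.getByRole("searchbox").fill("laptops");
  await page.getByRole("searchbox").press("Enter");
  await expect(page.getByTestId("search-results")).toBeVisible();
});
```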
Connect to and stream from any OpenAI/Anthropic API
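The package's own API is not documented here; as a generic reference, streaming with the official OpenAI Node SDK looks like this:

```ts
import OpenAI from "openai";

// Generic streaming with the official OpenAI Node SDK; reads OPENAI_API_KEY from the environment.
const client = new OpenAI();

const stream = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Stream a haiku about flow-based programming." }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
}
```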
CLI tool for automated PR reviews using LLM
QLLM Samples
QLLM CLI: A versatile CLI tool for interacting with multiple AI/LLM providers. Features include chat sessions, one-time queries, image handling, and conversation management. Streamlines AI development with easy provider/model switching and configuration.
Core library providing robust AI engineering functionalities tailored for Large Language Model (LLM) applications, enabling developers to build, deploy, and optimize AI solutions with ease.