unillm
<p align="center"> <a href="https://docs.unillm.ai/" target="_blank"> <img src="https://cdn.pezzo.ai/unillm/logo-light-mode.svg" alt="logo" width="280"> </a> </p>
OpenTelemetry-native auto-instrumentation library for monitoring LLM applications, making it easy to add observability to your GenAI-driven projects
A Node.js package that sends user prompts to various AI models and returns their output.
A Node.js RAG framework for easily working with LLMs and custom datasets
Universal LLM observability and cost intelligence: monitor OpenAI, Anthropic, Google AI, Cohere, and more
Zero-dependency, modular SDK for building robust natural language applications
CMMV module for LLM integration, tokenization, RAG dataset creation, and fast FAISS-based vector search for code indexing.
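At a small scale, the vector search a FAISS-backed index performs can be sketched as a brute-force cosine-similarity scan over embeddings. The sketch below is illustrative only; the names (`Vector`, `searchIndex`) are hypothetical and not the CMMV module's API.

```typescript
// Illustrative brute-force nearest-neighbour search over embedding vectors.
type Vector = number[];

function cosineSimilarity(a: Vector, b: Vector): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return the indices of the k vectors most similar to the query.
function searchIndex(index: Vector[], query: Vector, k: number): number[] {
  return index
    .map((v, i) => ({ i, score: cosineSimilarity(v, query) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map((entry) => entry.i);
}
```

A real FAISS index replaces this linear scan with approximate-nearest-neighbour structures so the search stays fast over millions of vectors.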
A unified TypeScript/JavaScript SDK for interacting with multiple AI model providers, including OpenAI, Anthropic, Cohere, Gemini, Mistral, DeepSeek, Llama, and XAI. The SDK provides a consistent interface for generating text and working with various AI models.
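The "consistent interface" idea behind such a unified SDK can be sketched as a shared `generateText` signature that each provider adapts its own API behind. The interface and the echo provider below are hypothetical illustrations, not this SDK's actual API.

```typescript
// A provider-agnostic text-generation interface (hypothetical names).
interface TextProvider {
  name: string;
  generateText(prompt: string): Promise<string>;
}

// A stand-in provider; a real adapter would call OpenAI, Anthropic, etc.
class EchoProvider implements TextProvider {
  name = "echo";
  async generateText(prompt: string): Promise<string> {
    return `[echo] ${prompt}`;
  }
}

// The caller picks a provider by name but uses one uniform call site.
async function generate(
  providers: Map<string, TextProvider>,
  providerName: string,
  prompt: string
): Promise<string> {
  const provider = providers.get(providerName);
  if (!provider) throw new Error(`Unknown provider: ${providerName}`);
  return provider.generateText(prompt);
}
```

Swapping providers then means registering a different adapter under the same interface; application code calling `generate` is unchanged.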
Core functionalities for generative-ts
A Reactive CLI that generates git commit messages with various AI
A Node.js wrapper for logging LLM traces directly to Helicone, bypassing the proxy, with OpenLLMetry
A Node.js wrapper for some of Helicone's common functionalities
An SDK written in TypeScript for the [Inference Gateway](https://github.com/inference-gateway/inference-gateway).