@duck4i/llama
Native Node.js plugin to run LLaMA inference directly on your machine with no other dependencies.
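For context, a call into such a plugin typically looks like the sketch below. This is a minimal sketch only: the RunInference name, its signature, and the model path are illustrative assumptions, not the confirmed @duck4i/llama API; check the package README for the real interface.

```typescript
// Minimal sketch — RunInference, its signature, and the model path are
// assumptions for illustration, not the package's confirmed API.
const { RunInference } = require("@duck4i/llama");

const reply = RunInference(
  "model.gguf",                     // hypothetical path to a local GGUF model
  "You are a helpful assistant.",   // system prompt
  "What is the capital of France?"  // user prompt
);
console.log(reply);
```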
Another Node binding of llama.cpp
Native module for another Node binding of llama.cpp (darwin-arm64)
Native module for another Node binding of llama.cpp (darwin-x64)
Native module for another Node binding of llama.cpp (linux-arm64)
Native module for another Node binding of llama.cpp (linux-arm64-cuda)
Native module for another Node binding of llama.cpp (linux-arm64-vulkan)
Native module for another Node binding of llama.cpp (linux-x64)
Native module for another Node binding of llama.cpp (linux-x64-cuda)
Native module for another Node binding of llama.cpp (linux-x64-vulkan)
Native module for another Node binding of llama.cpp (win32-arm64)
Native module for another Node binding of llama.cpp (win32-arm64-vulkan)
Native module for another Node binding of llama.cpp (win32-x64)
Native module for another Node binding of llama.cpp (win32-x64-cuda)
Native module for another Node binding of llama.cpp (win32-x64-vulkan)
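These per-platform entries are the prebuilt native binaries for the binding above; a parent package typically selects the matching one from process.platform and process.arch at load time. The sketch below illustrates that common pattern; the package naming scheme and the LLAMA_ACCEL variable are assumptions for illustration, not part of these actual packages.

```typescript
// Illustrative sketch of how per-platform prebuilt packages are usually
// resolved; the naming scheme and env variable here are assumptions.
function nativePackageName(): string {
  const { platform, arch } = process;          // e.g. "linux", "x64"
  const accel = process.env.LLAMA_ACCEL ?? ""; // hypothetical: "cuda" | "vulkan" | ""
  const suffix = accel ? `-${accel}` : "";
  return `llama.node-${platform}-${arch}${suffix}`; // assumed naming scheme
}

// The parent binding would then load the matching native module, e.g.:
// const native = require(nativePackageName());
```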
Core functionalities for generative-ts
A fork of the llamaindex package from the LlamaIndexTS repo
Load and use an LLM model directly in Electron. Experimental.
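Because native modules can only be loaded in Electron's main process, a package like this would typically be exposed to renderer windows over IPC. The following is a minimal sketch of that standard Electron wiring; the llm-binding import and RunInference call are hypothetical placeholders, not this package's actual API.

```typescript
// Sketch only: "llm-binding" stands in for whichever native binding the
// Electron package wraps; the IPC wiring itself is the standard pattern.
import { app, BrowserWindow, ipcMain } from "electron";
// import { RunInference } from "llm-binding"; // hypothetical native binding

app.whenReady().then(() => {
  // Native inference runs in the main process; renderers call in via IPC.
  ipcMain.handle("llm:generate", async (_event, prompt: string) => {
    // return RunInference("model.gguf", "You are helpful.", prompt);
    return `echo: ${prompt}`; // stand-in so the sketch runs without a model
  });
  new BrowserWindow({ webPreferences: { contextIsolation: true } });
});
```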