LocalAI Framework

Build AI apps faster: no LLM API keys, no cloud costs, no boilerplate. Just code.

LocalAI Framework is a zero-config, AI-native web framework that makes it easy to build applications with embedded LLMs. It provides a unified API for text generation, embeddings, and agentic workflows, all running locally on your machine.

Features

  • 🚀 Zero Configuration: Get started in seconds with our CLI
  • 🤖 Embedded LLM: Ships with TinyLlama for instant local inference
  • 🔌 Unified API: Simple React hooks for AI functionality
  • 💻 Local-First: No API keys or cloud costs required
  • 🔄 Hybrid Mode: Optional cloud provider fallback
  • 🛠 Developer Tools: Built-in AI playground and performance monitoring

Quick Start

# Create a new project
npx create-localai@latest my-ai-app

# Navigate to the project
cd my-ai-app

# Start the development server
npm run dev

Usage

import { useLLM } from '@localai/framework';

function MyAIComponent() {
  // generate runs local inference; isLoading is true while a request is in flight.
  const { generate, isLoading } = useLLM();

  const handleClick = async () => {
    const response = await generate({
      prompt: "Write a short sci-fi story."
    });
    console.log(response.text);
  };

  return (
    <button onClick={handleClick} disabled={isLoading}>
      Generate Story
    </button>
  );
}
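
The unified API mentioned above also covers embeddings. Here is a minimal sketch of what that could look like, assuming a useEmbeddings hook and an embed({ input }) call, neither of which is confirmed by the package docs:

import { useEmbeddings } from '@localai/framework';

function SimilaritySearch() {
  // Hypothetical hook; the real export name and return shape may differ.
  const { embed } = useEmbeddings();

  const handleSearch = async (text: string) => {
    // Turn the query into a vector for local similarity search.
    const { vector } = await embed({ input: text });
    console.log(vector.length);
  };

  return <input placeholder="Search..." onChange={(e) => handleSearch(e.target.value)} />;
}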

Configuration

// _app.tsx or similar
import { LLMProvider } from '@localai/framework';

function MyApp({ Component, pageProps }) {
  return (
    <LLMProvider config={{ model: 'tinyllama', temperature: 0.7 }}>
      <Component {...pageProps} />
    </LLMProvider>
  );
}
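
Hybrid mode, the optional cloud fallback from the feature list, would plug into the same provider config. The fallback field and its option names below are illustrative assumptions, not documented API:

// _app.tsx or similar
import { LLMProvider } from '@localai/framework';

function MyApp({ Component, pageProps }) {
  return (
    // 'fallback' is a hypothetical option: use the local model first,
    // and route to a cloud provider only if local inference fails.
    <LLMProvider
      config={{
        model: 'tinyllama',
        temperature: 0.7,
        fallback: { provider: 'openai', apiKey: process.env.OPENAI_API_KEY }
      }}
    >
      <Component {...pageProps} />
    </LLMProvider>
  );
}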

Advanced Features

Agentic Workflows

import { defineAgent } from '@localai/framework';

const CodeAgent = defineAgent({
  role: "Senior Developer",
  tools: ['writeFile', 'runTests'],
  model: "phind-codellama"
});

// Use the agent
const result = await CodeAgent.execute("Refactor this function to use async/await");
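
The tools above are referenced by name, which suggests a built-in registry. If custom tools are supported, a definition might look like the following sketch; defineTool and its fields are assumptions for illustration, not confirmed exports:

import { defineAgent, defineTool } from '@localai/framework';

// Hypothetical helper; the real framework may register tools differently.
const lintTool = defineTool({
  name: 'lintCode',
  description: 'Run the project linter and report warnings',
  run: async (input: string) => {
    // Plug your own lint logic in here.
    return `No warnings found for: ${input.slice(0, 40)}`;
  }
});

const ReviewAgent = defineAgent({
  role: "Code Reviewer",
  tools: [lintTool],
  model: "phind-codellama"
});

const review = await ReviewAgent.execute("Review the changes in src/index.ts");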

RAG (Coming Soon)

import { useRAG } from '@localai/framework';

// Inside a React component: index local documents for retrieval.
const { query } = useRAG({
  documents: ['doc1.pdf', 'doc2.pdf'],
  model: 'tinyllama'
});

// Answers are grounded in the indexed documents.
const answer = await query("What do the documents say about X?");

Contributing

We welcome contributions! Please see our Contributing Guide for details.

License

MIT © LocalAI Team
