@agenite/agent v0.5.0 • MIT • Published 6 months ago

@agenite/agent

A powerful and flexible TypeScript library for building AI agents with advanced tool integration and state management capabilities.

Features

  • 🛠️ Advanced tool integration - Seamlessly integrate custom tools and APIs with type safety
  • 🔄 Stateful conversations - Built-in state management with custom reducers
  • 🌊 Streaming support - Real-time streaming of agent responses and tool executions
  • 🎯 Execution context - Track and manage nested agent executions with context inheritance
  • 🔌 Provider agnostic - Support for multiple LLM providers (Ollama, Bedrock)
  • 🎨 Flexible architecture - Build simple to complex agent hierarchies with middleware support
  • 📊 Token usage tracking - Monitor and optimize token consumption across executions
  • 🔄 Step-based execution - Fine-grained control over agent execution flow

Installation

npm install @agenite/agent

Quick start

import { Agent } from '@agenite/agent';
import { Tool } from '@agenite/tool';
import { OllamaProvider } from '@agenite/ollama';

// Create a simple calculator tool
const calculatorTool = new Tool({
  name: 'calculator',
  description: 'Perform basic math operations',
  execute: async ({ input }) => {
    // Tool implementation goes here: compute `result` from `input`
    const result = 0; // placeholder
    return { success: true, data: result.toString() };
  },
});

// Initialize the agent
const agent = new Agent({
  name: 'math-buddy',
  provider: new OllamaProvider({ model: 'llama2' }),
  tools: [calculatorTool],
  instructions: 'You are a helpful math assistant.',
});

// Execute the agent
const result = await agent.execute({
  messages: [{ role: 'user', content: [{ type: 'text', text: 'What is 1234 * 5678?' }] }],
});

Core concepts

Agent

The main class that orchestrates interactions between the LLM and tools. It handles:

  • Message processing and state management
  • Tool execution and result handling
  • Response streaming
  • Nested agent execution
  • Token usage tracking

Tools

Tools are functions that agents can use to perform specific tasks. Each tool has:

  • Name and description
  • Input schema with type safety
  • Execute function with context support
  • Error handling capabilities
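The pieces listed above can be sketched as a plain TypeScript interface. This is an illustration of the shape only, not Agenite's actual type definitions (`ToolDefinition`, `ToolResult`, and `echoTool` are hypothetical names):

```typescript
// Hypothetical shape of a tool, for illustration; see @agenite/tool for the real types.
interface ToolResult {
  success: boolean;
  data: string;
}

interface ToolDefinition<Input> {
  name: string;
  description: string;
  // Execute receives the parsed input and returns a result object.
  execute: (args: { input: Input }) => Promise<ToolResult>;
}

// Example: a trivial tool that echoes its input back.
const echoTool: ToolDefinition<{ text: string }> = {
  name: 'echo',
  description: 'Returns the input text unchanged',
  execute: async ({ input }) => ({ success: true, data: input.text }),
};
```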

Providers

LLM providers that handle the actual language model interactions:

  • Ollama
  • Amazon Bedrock
  • Extensible for other providers

Steps

Agent execution is broken down into a sequence of steps:

  • llm-call - Handles LLM interactions
  • tool-call - Manages tool execution
  • tool-result - Processes tool results
  • agent-call - Handles nested agent execution

Advanced usage

Stateful agent with custom reducer

const customReducer = {
  messages: (newValue, previousValue) => [
    ...(previousValue || []),
    ...(newValue || []),
  ],
  runningTotal: (newValue, previousValue) =>
    (previousValue || 0) + (newValue || 0),
};

const agent = new Agent({
  name: 'stateful-calculator',
  provider,
  tools: [calculatorTool],
  stateReducer: customReducer,
  initialState: {
    runningTotal: 0,
  },
  instructions:
    'You are a helpful math assistant that maintains a running total.',
});
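Each reducer key is a pure merge function from (new value, previous value) to the next value. A standalone sketch of how such reducers fold a partial update into state (independent of Agenite's actual internals; `applyReducer` and `CalcState` are illustrative names):

```typescript
interface CalcState {
  messages: string[];
  runningTotal: number;
}

// Each key gets a pure merge function: (newValue, previousValue) => nextValue.
const calcReducer = {
  messages: (n: string[] | undefined, p: string[] | undefined) => [
    ...(p ?? []),
    ...(n ?? []),
  ],
  runningTotal: (n: number | undefined, p: number | undefined) =>
    (p ?? 0) + (n ?? 0),
};

// Fold a partial update into the current state, key by key.
function applyReducer(state: CalcState, update: Partial<CalcState>): CalcState {
  return {
    messages: calcReducer.messages(update.messages, state.messages),
    runningTotal: calcReducer.runningTotal(update.runningTotal, state.runningTotal),
  };
}
```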

Nested agents with delegation

// Specialist agents
const calculatorAgent = new Agent({
  name: 'calculator-specialist',
  provider,
  tools: [calculatorTool],
  description: 'Specializes in mathematical calculations',
});

const weatherAgent = new Agent({
  name: 'weather-specialist',
  provider,
  tools: [weatherTool],
  description: 'Provides weather information',
});

// Coordinator agent
const coordinatorAgent = new Agent({
  name: 'coordinator',
  provider,
  agents: [calculatorAgent, weatherAgent],
  instructions:
    'Coordinate between specialist agents to solve complex problems.',
});
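The coordinator pattern comes down to routing each request to the specialist whose description best matches it. In Agenite the routing decision is made by the LLM; the naive keyword-overlap sketch below (`route` is a hypothetical helper) only illustrates the idea:

```typescript
interface Specialist {
  name: string;
  description: string;
}

const specialists: Specialist[] = [
  { name: 'calculator-specialist', description: 'math calculations arithmetic' },
  { name: 'weather-specialist', description: 'weather forecast temperature' },
];

// Pick the specialist whose description shares the most words with the query.
function route(query: string, agents: Specialist[]): string {
  const words = query.toLowerCase().split(/\s+/);
  let best = agents[0].name;
  let bestScore = -1;
  for (const agent of agents) {
    const descWords = new Set(agent.description.toLowerCase().split(/\s+/));
    const score = words.filter((w) => descWords.has(w)).length;
    if (score > bestScore) {
      bestScore = score;
      best = agent.name;
    }
  }
  return best;
}
```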

Streaming responses with middleware

const agent = new Agent({
  name: 'streaming-agent',
  provider,
  tools: [calculatorTool],
  middlewares: [
    executionContextInjector(),
    // Add custom middleware here
  ],
});

const iterator = agent.iterate({
  messages: [{ role: 'user', content: [{ type: 'text', text: 'Your query here' }] }],
  stream: true,
});

for await (const chunk of iterator) {
  switch (chunk.type) {
    case 'agenite.llm-call.streaming':
      console.log(chunk.content);
      break;
    case 'agenite.tool-call.params':
      console.log('Using tool:', chunk.toolUseBlocks);
      break;
    case 'agenite.tool-result':
      console.log('Tool result:', chunk.result);
      break;
  }
}
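The chunk types in the loop above form a discriminated union, which is what lets the `switch` narrow each case. A self-contained sketch of typing and consuming such a stream (the chunk type names follow the example above; the payload fields and `collectText` helper are illustrative assumptions):

```typescript
// Discriminated union of stream chunks, mirroring the event names above.
type StreamChunk =
  | { type: 'agenite.llm-call.streaming'; content: string }
  | { type: 'agenite.tool-call.params'; toolUseBlocks: unknown[] }
  | { type: 'agenite.tool-result'; result: string };

// Accumulate only the streamed text, ignoring tool events.
function collectText(chunks: StreamChunk[]): string {
  let text = '';
  for (const chunk of chunks) {
    if (chunk.type === 'agenite.llm-call.streaming') {
      text += chunk.content; // narrowed: `content` exists on this variant
    }
  }
  return text;
}
```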

API reference

Agent constructor

new Agent({
  name: string;
  provider: LLMProvider;
  tools?: Tool[];
  instructions?: string;
  description?: string;
  agents?: Agent[];
  stateReducer?: CustomStateReducer;
  initialState?: Partial<StateFromReducer<CustomStateReducer>>;
  steps?: Steps;
  middlewares?: Middlewares;
})

Execute method

execute({
  messages: BaseMessage[];
  stream?: boolean;
  context?: Record<string, unknown>;
}): Promise<ExecutionResult>

Iterate method

iterate({
  messages: BaseMessage[];
  stream?: boolean;
  context?: Record<string, unknown>;
}): AsyncIterableIterator<StreamChunk>

Examples

Check out the examples directory for more detailed examples:

  • basic/ - Simple examples showing core functionality
    • simple-chat.ts - Basic chat agent with calculator tool
  • advanced/ - More complex examples
    • nested-agents.ts - Agent composition and delegation
    • stateful-agent.ts - Maintaining conversation state
    • streaming-agent.ts - Real-time response streaming
    • multi-tool-agent.ts - Using multiple tools

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

License

MIT
