
Workflow Group Chat Demo

This is a demonstration of using the AIGNE Framework to build a group chat workflow. The example supports both one-shot and interactive chat modes, along with customizable model settings and pipeline input/output.

flowchart LR

manager(Group Manager)
user(User)
writer(Writer)
editor(Editor)
illustrator(Illustrator)

manager ==2 request to speak==> writer
manager --4 request to speak--> illustrator

writer -.3 group message.-> manager
writer -..-> editor
writer -..-> illustrator
writer -..-> user


classDef inputOutput fill:#f9f0ed,stroke:#debbae,stroke-width:2px,color:#b35b39,font-weight:bolder;
classDef processing fill:#F0F4EB,stroke:#C2D7A7,stroke-width:2px,color:#6B8F3C,font-weight:bolder;

class manager inputOutput
class user processing
class writer processing
class editor processing
class illustrator processing
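
To make the flow in the diagram concrete, the sketch below shows the group chat pattern it describes in plain TypeScript: a manager repeatedly picks the next speaker, and every reply is appended to a shared history that acts as the "group message" broadcast to all participants. The types and function names here are illustrative only and are not the AIGNE Framework API.

// Hypothetical, framework-agnostic sketch of the group chat pattern above.
// None of these names come from @aigne/core; they only illustrate how a
// group manager can route turns between the writer, editor, and illustrator.

type Message = { from: string; content: string };

interface Agent {
  name: string;
  respond(history: Message[]): Promise<string>;
}

interface GroupManager {
  // Returns the next agent that should speak, or null to end the chat.
  pickNextSpeaker(history: Message[]): Promise<Agent | null>;
}

async function runGroupChat(manager: GroupManager, userRequest: string): Promise<Message[]> {
  // The user's request starts the shared conversation history.
  const history: Message[] = [{ from: "user", content: userRequest }];

  for (;;) {
    const speaker = await manager.pickNextSpeaker(history);
    if (!speaker) break; // the manager decides the conversation is done

    // Each reply is appended to the shared history, which every
    // participant sees on its next turn (the dotted arrows above).
    const reply = await speaker.respond(history);
    history.push({ from: speaker.name, content: reply });
  }

  return history;
}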

Prerequisites

  • Node.js and npm installed on your machine
  • An OpenAI API key for interacting with OpenAI's services
  • Optional dependencies (if running the example from source code):
    • Bun for running unit tests & examples
    • Pnpm for package management

Quick Start (No Installation Required)

export OPENAI_API_KEY=YOUR_OPENAI_API_KEY # Set your OpenAI API key

# Run in one-shot mode (default)
npx -y @aigne/example-workflow-group-chat

# Run in interactive chat mode
npx -y @aigne/example-workflow-group-chat --chat

# Use pipeline input
echo "Write a short story about space exploration" | npx -y @aigne/example-workflow-group-chat

Installation

Clone the Repository

git clone https://github.com/AIGNE-io/aigne-framework

Install Dependencies

cd aigne-framework/examples/workflow-group-chat

pnpm install

Set Up Environment Variables

Set up your OpenAI API key in the .env.local file:

OPENAI_API_KEY="" # Set your OpenAI API key here

Run the Example

pnpm start # Run in one-shot mode (default)

# Run in interactive chat mode
pnpm start -- --chat

# Use pipeline input
echo "Write a short story about space exploration" | pnpm start

Run Options

The example supports the following command-line parameters:

  • --chat: Run in interactive chat mode. Default: disabled (one-shot mode).
  • --model <provider[:model]>: AI model to use in format 'provider:model' where model is optional. Examples: 'openai' or 'openai:gpt-4o-mini'. Default: openai.
  • --temperature <value>: Temperature for model generation. Default: provider default.
  • --top-p <value>: Top-p sampling value. Default: provider default.
  • --presence-penalty <value>: Presence penalty value. Default: provider default.
  • --frequency-penalty <value>: Frequency penalty value. Default: provider default.
  • --log-level <level>: Set logging level (ERROR, WARN, INFO, DEBUG, TRACE). Default: INFO.
  • --input, -i <input>: Specify input directly. Default: none.

Examples

# Run in chat mode (interactive)
pnpm start -- --chat

# Set logging level
pnpm start -- --log-level DEBUG

# Use pipeline input
echo "Write a short story about space exploration" | pnpm start

License

This project is licensed under the MIT License.
