@toolkitai/magnetic v0.1.1 (published 8 months ago)

magnetic by Toolkit

magnetic is a way to run shareable, reusable, tool-powered LLM and agent workflows. Think of it as a workflow runner and package manager for AI tools and workflows: individual workflows can be run on their own, extended, and combined to powerful effect.

Magnetic is still in Beta and may change at any time.

Installation

  1. Clone the repository:

    git clone https://github.com/toolkit-ai/magnet-cli-v2.git
  2. Navigate to the project directory:

    cd magnet-cli-v2
  3. Install dependencies:

    bun install
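The steps above assume Bun (https://bun.sh) is installed, since both dependency installation and the CLI itself run through it. A quick preflight check:

```shell
# Verify Bun is available before installing dependencies.
if command -v bun >/dev/null 2>&1; then
  bun --version
else
  echo "Bun not found - see https://bun.sh for install instructions"
fi
```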

Usage

The magnetic CLI offers several commands:

  • run: Executes a workflow
  • create-recipe: Creates a new recipe (composition of workflows)
  • list-recipes: Lists available recipes (compositions of workflows)
  • list-models: Lists available AI models
  • list-workflows: Lists available workflows
  • create-workflow: Creates a new workflow file
  • pipe: Runs a series of workflows, piping the output of each into the next, similar to a UNIX pipeline
  • agent: An agent that composes and runs workflows for you (coming soon)

Example usage:

# Create a new recipe
bun run start create-recipe

# Run a workflow from a recipe file
bun run start run --recipe recipes/project-overview.yaml

# Run a workflow with CLI parameters
bun run start run --workflow generate-knowledge-from-codebase \
  --context.directory . \
  --context.instructions "Please write a project overview" \
  --metadata.title "Project Overview" \
  --metadata.description "Writes an overview of the current project including general pattern, stack, and structure"

# List available workflows
bun run start list-workflows

# Create a new workflow
bun run start create-workflow

# Pipe multiple workflows together
bun run start pipe read-file count-words write-file
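The pipe command behaves like a UNIX pipeline: each workflow's output becomes the next workflow's input. A rough shell analogue of the read-file, count-words, write-file chain above (the file names here are illustrative only, not part of magnetic):

```shell
printf 'one two three\n' > input.txt         # stand-in for read-file's source
wc -w < input.txt | tr -d ' ' > output.txt   # count-words analogue
cat output.txt                               # write-file analogue: prints "3"
```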

Development

To run tests:

bun run test

Configuration

The tool supports various configuration options, including:

  • Debug mode
  • Custom directories for output and logs
  • AI model selection

These can be set via command-line options or environment variables.
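As a sketch, environment-variable configuration might look like the following. The variable names below are assumptions for illustration only, not documented magnetic settings; consult the CLI's own help output for the real option names.

```shell
# NOTE: these variable names are hypothetical placeholders, not magnetic's API.
export MAGNETIC_DEBUG=1            # enable debug mode (name is an assumption)
export MAGNETIC_OUTPUT_DIR=./out   # custom output directory (assumption)
export MAGNETIC_LOG_DIR=./logs     # custom log directory (assumption)
```

Subsequent `bun run start …` commands would then pick these up from the environment.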

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.