
Promptleo JavaScript Client

A lightweight JavaScript client library for interacting with Promptleo AI services. This client provides a simple interface to generate text and images using Stable Diffusion, LLaMA and other models through a unified API.

Installation

npm install promptleo-client

Quick Start

import PromptleoClient from "promptleo-client";

// Initialize the client with your API token
const client = new PromptleoClient({ token: "YOUR_API_TOKEN" });

// Generate an image
const imageResult = await client.generate({
  model: "stability-ai/stable-diffusion-xl-base-1.0",
  prompt: "A simple line drawing of an eagle",
});
console.log("Generated image URL:", imageResult.url);

// Generate text in a conversation format
const chatResult = await client.generate({
  model: "meta/llama-3.1-8b-instruct",
  messages: [{ role: "user", content: "Your question here" }],
});
console.log("Chat response:", chatResult.messages);

// Generate text completion
console.log("\nGenerating text with prompt...");
const promptResult = await client.generate({
  model: "meta/llama-3.2-3b",
  prompt: "I believe the meaning of life is",
});
console.log("Generated text:", promptResult.messages);

Changelog

1.0.0 - Initial release.

API Reference

Constructor

const client = new PromptleoClient({ token: "YOUR_API_TOKEN" });
  • token (string, required): Your API authentication token, available on your account page.
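
If the client runs in Node.js, you may prefer to read the token from an environment variable rather than hard-coding it. A minimal sketch (the PROMPTLEO_API_TOKEN variable name is just an example, not something the library defines):

import PromptleoClient from "promptleo-client";

// Read the token from the environment so it never lands in source control.
// PROMPTLEO_API_TOKEN is a hypothetical variable name chosen for this example.
const client = new PromptleoClient({ token: process.env.PROMPTLEO_API_TOKEN });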

Methods

generate(params)

Generic method to generate content using various AI models.

const result = await client.generate({
  model: string,    // required
  prompt?: string,  // required for image generation and text completion
  messages?: Array  // required for chat models
})

Parameters:

  • model (string, required): The model identifier
  • prompt (string, optional): The generation prompt, used for image generation and text completion requests.
  • messages (array, optional): Array of message objects for chat models.

Returns: Promise that resolves to the API response

Supported Models

Name                                 Identifier
FLUX.1 schnell                       black-forest-labs/flux.1-schnell
Stable Diffusion v3.5 Large Turbo    stability-ai/stable-diffusion-v3.5-large-turbo
Stable Diffusion v3.5 Large          stability-ai/stable-diffusion-v3.5-large
Stable Diffusion XL Base 1.0         stability-ai/stable-diffusion-xl-base-1.0
Stable Diffusion 1.5                 stability-ai/stable-diffusion-v1.5
Stable Diffusion 1.4                 stability-ai/stable-diffusion-v1.4
Qwen2.5 14B Instruct                 qwen/qwen2.5-14b-instruct
Meta Llama 3.1 8B                    meta/llama-3.1-8b
Meta Llama 3.1 8B Instruct           meta/llama-3.1-8b-instruct
Meta Llama 3.2 3B                    meta/llama-3.2-3b
Meta Llama 3.2 3B Instruct           meta/llama-3.2-3b-instruct

Image Generation

Generates images from text descriptions (text-to-image).

const result = await client.generate({
  model: "stability-ai/stable-diffusion-xl-base-1.0",
  prompt: "Your image description",
});

Response format:

{
  url: string; // URL of the generated image
}
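
The response contains only a URL, so persisting the image is left to the caller. A minimal sketch for Node.js 18+ (it assumes the global fetch API and that the returned URL is directly downloadable):

import { writeFile } from "node:fs/promises";

const result = await client.generate({
  model: "stability-ai/stable-diffusion-xl-base-1.0",
  prompt: "A simple line drawing of an eagle",
});

// Download the generated image and save it locally.
const response = await fetch(result.url);
const buffer = Buffer.from(await response.arrayBuffer());
await writeFile("eagle.png", buffer);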

Text Generation

Generates text responses using messages in chat format.

const result = await client.generate({
  model: "meta/llama-3.1-8b-instruct",
  messages: [
    {
      role: "user",
      content: "Your question or prompt",
    },
  ],
});

Response format:

{
  messages: [
    {
      role: string,
      content: string,
    },
  ];
}
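
To continue a conversation, one straightforward pattern is to append the returned messages to your history and send the whole history back on the next call. A minimal sketch (it assumes the response's messages array holds the assistant reply in the format shown above):

const history = [{ role: "user", content: "Name three uses for a paperclip." }];

const first = await client.generate({
  model: "meta/llama-3.1-8b-instruct",
  messages: history,
});

// Keep the assistant's reply in the history, then ask a follow-up question.
history.push(...first.messages);
history.push({ role: "user", content: "Which of those is the most common?" });

const second = await client.generate({
  model: "meta/llama-3.1-8b-instruct",
  messages: history,
});
console.log(second.messages);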

Text Completion

Generates a text completion from a base model and a prompt.

const result = await client.generate({
  model: "meta/llama-3.2-3b",
  prompt: "I believe the meaning of life is",
});

Response format:

{
  messages: [
    {
      generated_text: string,
    },
  ];
}
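
Reading the completion out of the response then looks like this (assuming the response shape shown above):

const result = await client.generate({
  model: "meta/llama-3.2-3b",
  prompt: "I believe the meaning of life is",
});

// The completion text sits on the first element of the messages array.
const completion = result.messages[0].generated_text;
console.log(completion);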

Error Handling

The client throws errors in the following cases:

  • Missing authentication token
  • Missing required parameters (model, prompt/messages)
  • API request failures
  • Network errors
  • Invalid responses

Example error handling:

try {
  const result = await client.generate({
    model: "stability-ai/stable-diffusion-xl-base-1.0",
    prompt: "An image description",
  });
} catch (error) {
  console.error("Generation failed:", error.message);
}
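
Network errors and API request failures are often transient, while missing parameters are not, so you may want to retry only a few times before surfacing the error. A minimal sketch (the retry policy and the generateWithRetry helper are examples, not part of the client):

// Retry a generate call a few times with a short, growing delay.
async function generateWithRetry(params, attempts = 3) {
  for (let attempt = 1; attempt <= attempts; attempt++) {
    try {
      return await client.generate(params);
    } catch (error) {
      if (attempt === attempts) throw error;
      console.warn(`Attempt ${attempt} failed: ${error.message}. Retrying...`);
      await new Promise((resolve) => setTimeout(resolve, 1000 * attempt));
    }
  }
}

const result = await generateWithRetry({
  model: "meta/llama-3.1-8b-instruct",
  messages: [{ role: "user", content: "Hello!" }],
});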

Bug Reports and Feature Requests

Please file a ticket here.

Development

# Install dependencies
npm install

# Start development server
npm run dev

# Build for production
npm run build

# Run the sample code (set your API token in src/sample.js first)
npm run sample

License

Apache-2.0

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.