
deepinfra-api

A simple wrapper around DeepInfra's Inference API. Check out the DeepInfra docs for the full model reference.

Installation

npm install deepinfra-api

Usage

Use text generation models

Mixtral, developed by Mistral AI, is a sparse mixture-of-experts (MoE) model: each token is routed to a subset of 8 expert networks, so only part of the model runs per token. The example below uses the 8x22B instruct variant.

import { Mixtral } from "deepinfra-api";
const modelName = "mistralai/Mixtral-8x22B-Instruct-v0.1";
const apiKey = "YOUR_DEEPINFRA_API_KEY";
const main = async () => {
  const mixtral = new Mixtral(modelName, apiKey);
  const body = {
    input: "What is the capital of France?",
  };
  const output = await mixtral.generate(body);
  const text = output.results[0].generated_text;
  console.log(text);
};

main();
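The request body can also carry the usual text-generation parameters. The helper below is an illustrative sketch, not part of the package; the parameter names (max_new_tokens, temperature, top_p) follow DeepInfra's text-generation API, so verify them against the docs for the model you call.

```javascript
// Illustrative helper (not part of deepinfra-api): merge a prompt with
// default generation parameters; caller-supplied options override defaults.
const buildGenerationBody = (input, options = {}) => ({
  input,
  max_new_tokens: 512,
  temperature: 0.7,
  top_p: 0.9,
  ...options,
});

const body = buildGenerationBody("What is the capital of France?", {
  temperature: 0.2,
});
console.log(body.temperature); // 0.2
```

Passing the resulting object to `mixtral.generate(body)` works exactly as in the example above.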

Use text embedding models

GTE Base is a text embedding model that generates embeddings for the input text. The model was trained by Alibaba DAMO Academy.

import { GteBase } from "deepinfra-api";

const apiKey = "YOUR_DEEPINFRA_API_KEY";
const modelName = "thenlper/gte-base";
const main = async () => {
  const gteBase = new GteBase(modelName, apiKey);
  const body = {
    inputs: [
      "What is the capital of France?",
      "What is the capital of Germany?",
      "What is the capital of Italy?",
    ],
  };
  const output = await gteBase.generate(body);
  const embeddings = output.embeddings[0];
  console.log(embeddings);
};

main();
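Embedding vectors are typically compared with cosine similarity. A minimal sketch in plain JavaScript (not part of the package):

```javascript
// Cosine similarity between two equal-length vectors:
// dot(a, b) / (||a|| * ||b||), a value in [-1, 1].
const cosineSimilarity = (a, b) => {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
};

console.log(cosineSimilarity([1, 0], [1, 0])); // 1
console.log(cosineSimilarity([1, 0], [0, 1])); // 0
```

Embeddings returned by `gteBase.generate` can be compared this way to rank which inputs are semantically closest.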

Use SDXL to generate images

SDXL takes model-specific parameters, so it is initialized differently: the constructor takes only the API key.

import { Sdxl } from "deepinfra-api";
import axios from "axios";
import fs from "fs";

const apiKey = "YOUR_DEEPINFRA_API_KEY";

const main = async () => {
  const model = new Sdxl(apiKey);

  const input = {
    prompt: "The quick brown fox jumps over the lazy dog",
  };
  const response = await model.generate({ input });
  const { output } = response;
  const image = output[0];

  const imageResponse = await axios.get(image, { responseType: "arraybuffer" });
  fs.writeFileSync("image.png", imageResponse.data);
};

main();
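Depending on the model and settings, an output item may be a plain URL (as above) or base64 image data. The helper below is an assumption about the data-URI shape, not a documented guarantee of the package; verify against the API docs before relying on it:

```javascript
import fs from "fs";

// Save an image given either a "data:image/...;base64," data URI or a
// raw base64 string. Plain URLs should be fetched first, as shown above.
const saveBase64Image = (data, file) => {
  const base64 = data.replace(/^data:image\/\w+;base64,/, "");
  fs.writeFileSync(file, Buffer.from(base64, "base64"));
};
```

This decodes the base64 payload with Node's built-in Buffer, so no extra dependency is needed.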

Use other text to image models

import { TextToImage } from "deepinfra-api";
import axios from "axios";
import fs from "fs";

const apiKey = "YOUR_DEEPINFRA_API_KEY";
const modelName = "stabilityai/stable-diffusion-2-1";
const main = async () => {
  const model = new TextToImage(modelName, apiKey);
  const input = {
    prompt: "The quick brown fox jumps over the lazy dog",
  };

  const response = await model.generate(input);
  const { output } = response;
  const image = output[0];

  const imageResponse = await axios.get(image, { responseType: "arraybuffer" });
  fs.writeFileSync("image.png", imageResponse.data);
};
main();

Use automatic speech recognition models

import { AutomaticSpeechRecognition } from "deepinfra-api";
import path from "path";

const apiKey = "YOUR_DEEPINFRA_API_KEY";
const modelName = "openai/whisper-base";

const main = async () => {
  const model = new AutomaticSpeechRecognition(modelName, apiKey);

  const input = {
    audio: path.join(__dirname, "audio.mp3"),
  };
  const response = await model.generate(input);
  const { text } = response;
  console.log(text);
};

main();

Use object detection models

import { ObjectDetection } from "deepinfra-api";
import path from "path";

const apiKey = "YOUR_DEEPINFRA_API_KEY";
const modelName = "hustvl/yolos-tiny";
const main = async () => {
  const model = new ObjectDetection(modelName, apiKey);

  const input = {
    image: path.join(__dirname, "image.jpg"),
  };
  const response = await model.generate(input);
  const { results } = response;
  console.log(results);
};

main();

Use token classification models

import { TokenClassification } from "deepinfra-api";

const apiKey = "YOUR_DEEPINFRA_API_KEY";
const modelName = "Davlan/bert-base-multilingual-cased-ner-hrl";

const main = async () => {
  const model = new TokenClassification(modelName, apiKey);

  const input = {
    text: "The quick brown fox jumps over the lazy dog",
  };
  const response = await model.generate(input);
  const { results } = response;
  console.log(results);
};

main();

Use fill mask models

import { FillMask } from "deepinfra-api";

const apiKey = "YOUR_DEEPINFRA_API_KEY";
const modelName = "GroNLP/bert-base-dutch-cased";

const main = async () => {
  const model = new FillMask(modelName, apiKey);

  const body = {
    input: "Ik heb een [MASK] gekocht.",
  };

  const { results } = await model.generate(body);
  console.log(results);
};

main();

Use image classification models

import { ImageClassification } from "deepinfra-api";
import path from "path";

const apiKey = "YOUR_DEEPINFRA_API_KEY";
const modelName = "google/vit-base-patch16-224";

const main = async () => {
  const model = new ImageClassification(modelName, apiKey);

  const body = {
    image: path.join(__dirname, "image.jpg"),
  };

  const { results } = await model.generate(body);
  console.log(results);
};

main();

Use zero-shot image classification models

import { ZeroShotImageClassification } from "deepinfra-api";
import path from "path";

const apiKey = "YOUR_DEEPINFRA_API_KEY";
const modelName = "openai/clip-vit-base-patch32";

const main = async () => {
  const model = new ZeroShotImageClassification(modelName, apiKey);

  const body = {
    image: path.join(__dirname, "image.jpg"),
    candidate_labels: ["dog", "cat", "car"],
  };

  const { results } = await model.generate(body);
  console.log(results);
};

main();

Use text classification models

import { TextClassification } from "deepinfra-api";

const apiKey = "YOUR_DEEPINFRA_API_KEY";
const modelName = "ProsusAI/finbert";

const main = async () => {
  const model = new TextClassification(modelName, apiKey);

  const body = {
    input:
      "DeepInfra emerges from stealth with $8M to make running AI inferences more affordable",
  };

  const { results } = await model.generate(body);
  console.log(results);
};

main();

Contributors

Oguz Vuruskaner

Iskren Ivov Chernev

Contributing

Please read CONTRIBUTING.md for details on our code of conduct, and the process for submitting pull requests to us.

License

This project is licensed under the MIT License - see the LICENSE file for details.

Version history

2.0.1 (28 days ago)
2.0.0 (1 month ago)
2.0.0-rc (1 month ago)
1.6.2 (2 months ago)
1.6.1 (2 months ago)
1.4.3 (2 months ago)
1.6.0 (2 months ago)
1.5.1 (2 months ago)
1.5.0 (2 months ago)
1.4.2 (2 months ago)
1.4.1 (2 months ago)
1.3.2 (2 months ago)
1.4.0 (2 months ago)
1.3.1 (2 months ago)
1.3.0 (2 months ago)