varai v1.0.4
Introduction

VarAI is a command-line tool that leverages a variety of AI providers to recommend variable names for your projects. Use it to streamline your coding process and effortlessly craft meaningful variable names tailored to your needs.

Supported Providers

Remote

  • OpenAI
  • Anthropic Claude
  • Gemini
  • Mistral AI
  • Huggingface
  • Clova X

Local

  • Ollama

Setup

The minimum supported version of Node.js is v18. Check your Node.js version with node --version.

  1. Install VarAI:
npm install -g varai
  2. Retrieve and set the API keys or cookies you intend to use:

It is not necessary to set all keys, but at least one must be set.

You may need to create an account and set up billing.

varai config set OPENAI_KEY=<your key>
varai config set ANTHROPIC_KEY=<your key>
varai config set GEMINI_KEY=<your key>
varai config set MISTRAL_KEY=<your key>
# Be careful with escape characters (\", \') in the browser cookie string
varai config set HUGGING_COOKIE="<your browser cookie>"
# Be careful with escape characters (\", \') in the browser cookie string
varai config set CLOVAX_COOKIE="<your browser cookie>"

This will create a .varai file in your home directory.
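
The file stores plain key=value pairs, one per line. A minimal sketch of what it might contain (illustrative only; the exact contents depend on which options you set):

# ~/.varai
OPENAI_KEY=<your key>
generate=3
language=typescript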

  3. Run VarAI with a message describing what you need:
varai -m "this class is for generating variable names"

Using Locally

You can also use a local model for free with Ollama, and you can use Ollama and remote providers simultaneously.

  1. Install Ollama from https://ollama.com

  2. Start it with your model

ollama run llama2 # the model you want to use, e.g. llama3, codellama
  3. Set the model and host
varai config set OLLAMA_MODEL=<your model> 
varai config set OLLAMA_HOST=<host> # Optional. The default host for ollama is http://localhost:11434.
varai config set OLLAMA_TIMEOUT=<timeout> # Optional. default is 100000ms (100s)

If you want to use Ollama, you must set OLLAMA_MODEL.

  4. Run VarAI with a message describing what you need:
varai -m "this class is managing session"

Usage

CLI mode

You can call varai directly to generate a variable name for your message:

varai -m "Please generate some class names for managing session"

CLI Options

--message or -m
  • Message that represents the user's request or instruction to the AI (required)
varai --message "Please generate some class names for managing session"  # or -m <s>
--language or -l
  • Code Language to use for the generated variables
varai --language <s> # or -l <s>
--generate or -g
  • Number of variable names to generate (default: 3)
varai --generate <i> # or -g <i>

Warning: this uses more tokens, meaning it costs more.

--prompt or -p
  • Additional prompt that lets you fine-tune the provided prompt
varai --prompt <s> # or -p <s>
--clipboard or -c
  • Copy the selected variable name to the clipboard (default: true)
varai --clipboard=false # or -c=false
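
Options can be combined. For example, this (illustrative) invocation asks for five TypeScript names without copying the result to the clipboard:

varai -m "this class is managing session" -l typescript -g 5 -c=false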

Configuration

Reading a configuration value

To retrieve a configuration option, use the command:

varai config get <key>

For example, to retrieve the API key, you can use:

varai config get OPENAI_KEY

You can also retrieve multiple configuration options at once by separating them with spaces:

varai config get OPENAI_KEY OPENAI_MODEL GEMINI_KEY 

Setting a configuration value

To set a configuration option, use the command:

varai config set <key>=<value>

For example, to set the API key, you can use:

varai config set OPENAI_KEY=<your-api-key>

You can also set multiple configuration options at once by separating them with spaces:

varai config set OPENAI_KEY=<your-api-key> generate=3 language=C++

Options

| Option | Default | Description |
| --- | --- | --- |
| OPENAI_KEY | N/A | The OpenAI API key |
| OPENAI_MODEL | gpt-3.5-turbo | The OpenAI model to use |
| OPENAI_URL | https://api.openai.com | The OpenAI URL |
| OPENAI_PATH | /v1/chat/completions | The OpenAI request pathname |
| ANTHROPIC_KEY | N/A | The Anthropic API key |
| ANTHROPIC_MODEL | claude-3-haiku-20240307 | The Anthropic model to use |
| GEMINI_KEY | N/A | The Gemini API key |
| GEMINI_MODEL | gemini-pro | The Gemini model to use |
| MISTRAL_KEY | N/A | The Mistral API key |
| MISTRAL_MODEL | mistral-tiny | The Mistral model to use |
| HUGGING_COOKIE | N/A | The HuggingFace cookie string |
| HUGGING_MODEL | mistralai/Mixtral-8x7B-Instruct-v0.1 | The HuggingFace model to use |
| CLOVAX_COOKIE | N/A | The Clova X cookie string |
| OLLAMA_MODEL | N/A | The Ollama model; it must be downloaded locally |
| OLLAMA_HOST | http://localhost:11434 | The Ollama host |
| OLLAMA_TIMEOUT | 100000 ms | Request timeout for Ollama |
| OLLAMA_STREAM | N/A | Whether to make stream requests (experimental) |
| language | N/A | Code language to use for the generated variables |
| generate | 3 | Number of variable names to generate |
| proxy | N/A | HTTP/HTTPS proxy to use for requests (OpenAI only) |
| timeout | 10000 ms | Network request timeout |
| max-length | 15 | Maximum character length of the generated variable names |
| max-tokens | 200 | Maximum number of tokens the model can generate (OpenAI, Anthropic, Gemini, Mistral) |
| temperature | 0.7 | Temperature (0.0-2.0) controlling the randomness of the output (OpenAI, Anthropic, Gemini, Mistral) |
| prompt | N/A | Additional prompt to fine-tune the provided prompt |

Currently, options apply to all providers. Per-provider options are planned for a future release.

Available Options by Model

| Model | language | generate | proxy | timeout | max-length | max-tokens | temperature | prompt |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| OpenAI | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| Anthropic Claude | ✓ | ✓ | | ✓ | ✓ | ✓ | ✓ | ✓ |
| Gemini | ✓ | ✓ | | ✓ | ✓ | ✓ | ✓ | ✓ |
| Mistral AI | ✓ | ✓ | | ✓ | ✓ | ✓ | ✓ | ✓ |
| Huggingface | ✓ | ✓ | | ✓ | ✓ | | | ✓ |
| Clova X | ✓ | ✓ | | ✓ | ✓ | | | ✓ |
| Ollama | ✓ | ✓ | | ✓ (OLLAMA_TIMEOUT) | ✓ | | | ✓ |

Common Options

language

Code language to use for the generated variables (e.g. typescript, c++).
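
For example, to generate Python-style names:

varai config set language=python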

generate

Default: 3

The number of variable names to generate to pick from.

Note that this will use more tokens as it generates more results.
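
For example, to generate five candidates to pick from:

varai config set generate=5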

proxy

Set an HTTP/HTTPS proxy to use for requests. Only supported with OpenAI.
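
For example, to route requests through a local proxy (the URL below is illustrative, not a VarAI default):

varai config set proxy=http://localhost:8080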

To clear the proxy option, use the command (note the empty value after the equals sign):

varai config set proxy=
timeout

The timeout for network requests, in milliseconds.

Default: 10000 (10 seconds)

varai config set timeout=20000 # 20s
max-length

The maximum character length of the generated variable names.

Default: 15

varai config set max-length=30
max-tokens

The maximum number of tokens that the AI models can generate.

Default: 200

varai config set max-tokens=1000
temperature

The temperature (0.0-2.0) controls the randomness of the output.

Default: 0.7

varai config set temperature=0
prompt

Additional prompt that lets you fine-tune the provided prompt. You can give the AI extra instructions to guide what the generated variable names should look like.

varai config set prompt="Do not mention config changes"

OpenAI

OPENAI_KEY

The OpenAI API key. You can retrieve it from the OpenAI API Keys page.

OPENAI_MODEL

Default: gpt-3.5-turbo

The Chat Completions (/v1/chat/completions) model to use. Consult the list of models available in the OpenAI Documentation.

Tip: If you have access, try upgrading to gpt-4 for next-level code analysis. It can handle double the input size, but comes at a higher cost. Check out OpenAI's website to learn more.

varai config set OPENAI_MODEL=gpt-4

OPENAI_URL

Default: https://api.openai.com

The OpenAI URL. Both https and http protocols are supported. This allows you to run a local OpenAI-compatible server.
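
For example, to point VarAI at a locally hosted OpenAI-compatible server (the host and port below are illustrative):

varai config set OPENAI_URL=http://localhost:8080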

OPENAI_PATH

Default: /v1/chat/completions

The OpenAI Path.

Anthropic Claude

ANTHROPIC_KEY

The Anthropic API key. To get started with Anthropic Claude, request access to their API at anthropic.com/earlyaccess.

ANTHROPIC_MODEL

Default: claude-3-haiku-20240307

Supported:

  • claude-3-haiku-20240307
  • claude-3-sonnet-20240229
  • claude-3-opus-20240229
  • claude-2.1
  • claude-2.0
  • claude-instant-1.2
varai config set ANTHROPIC_MODEL=claude-instant-1.2

GEMINI

GEMINI_KEY

The Gemini API key. If you don't have one, create a key in Google AI Studio.

GEMINI_MODEL

Default: gemini-pro

Supported:

  • gemini-pro

Currently only one model is supported, but the list will be updated as Gemini adds support for other models.

MISTRAL

MISTRAL_KEY

The Mistral API key. If you don't have one, please sign up and subscribe in the Mistral Console.

MISTRAL_MODEL

Default: mistral-tiny

Supported:

  • open-mistral-7b
  • mistral-tiny-2312
  • mistral-tiny
  • open-mixtral-8x7b
  • mistral-small-2312
  • mistral-small
  • mistral-small-2402
  • mistral-small-latest
  • mistral-medium-latest
  • mistral-medium-2312
  • mistral-medium
  • mistral-large-latest
  • mistral-large-2402
  • mistral-embed

The models mentioned above are subject to change.
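
For example, to switch to one of the listed models:

varai config set MISTRAL_MODEL=mistral-small-latest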

HuggingFace Chat

HUGGING_COOKIE

The Huggingface Chat cookie. See How to get Cookie (Unofficial API) below.

HUGGING_MODEL

Default: mistralai/Mixtral-8x7B-Instruct-v0.1

Supported:

  • CohereForAI/c4ai-command-r-plus
  • meta-llama/Meta-Llama-3-70B-Instruct
  • HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1
  • mistralai/Mixtral-8x7B-Instruct-v0.1
  • NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO
  • google/gemma-1.1-7b-it
  • mistralai/Mistral-7B-Instruct-v0.2
  • microsoft/Phi-3-mini-4k-instruct

The models mentioned above are subject to change.
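
For example, to switch to one of the listed models:

varai config set HUGGING_MODEL=meta-llama/Meta-Llama-3-70B-Instruct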

Clova X

CLOVAX_COOKIE

The Clova X cookie. See How to get Cookie (Unofficial API) below.

Ollama

OLLAMA_MODEL

The Ollama model. See the list of available models in the Ollama library. It must be downloaded locally before use.

OLLAMA_HOST

Default: http://localhost:11434

The Ollama host

OLLAMA_TIMEOUT

Default: 100000 (100 seconds)

Request timeout for Ollama. The default is 100 seconds because running models locally can take a long time.
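
For example, to raise the timeout on a slower machine (mirroring the setup example above):

varai config set OLLAMA_TIMEOUT=200000 # 200s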

OLLAMA_STREAM

Default: false

Determines whether the application will make stream requests to Ollama. This feature is experimental and may not be fully stable.
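
Given the boolean default, enabling it would look like this (a sketch, assuming it is set like other options):

varai config set OLLAMA_STREAM=true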

Upgrading

Check the installed version with:

varai --version

If it's not the latest version, run:

npm update -g varai

How to get Cookie (Unofficial API)

  • Log in to the site you want to use
  • Open your browser's developer tools and go to the Network tab
  • Inspect any request, find the Cookie header, and copy its entire value
  • Check the image below for the format of the cookie

When setting cookies with long string values, be sure to escape characters like ", ', and others properly.

  • For double quotes ("), use \"
  • For single quotes ('), use \'

[image: how-to-get-cookie]

[image: how-to-get-clova-x-cookie]

Disclaimer

This project utilizes certain functionalities or data from external APIs, but it is important to note that it is not officially affiliated with or endorsed by the providers of those APIs. The use of external APIs is at the sole discretion and risk of the user.

Risk Acknowledgment

Users are responsible for understanding and abiding by the terms of use, rate limits, and policies set forth by the respective API providers. The project maintainers cannot be held responsible for any misuse, downtime, or issues arising from the use of the external APIs.

It is recommended that users thoroughly review the API documentation and adhere to best practices to ensure a positive and compliant experience.

Please Star ⭐️

If this project has been helpful to you, I would greatly appreciate it if you could click the Star⭐️ button on this repository!

Maintainers

  • tak-bro

Contributing

If you want to help fix a bug or implement a feature listed in Issues, check out the Contribution Guide to learn how to set up and test the project.