oai-cli

Simple & opinionated completion CLI for OpenAI-compatible APIs. The purpose is to test local LLM output in a very simple way.

Installation

npm install -g oai-cli

Usage

The config file is ~/.oai-cli.toml. Example:

endpoint = "http://localhost:1234/v1" # e.g. a llama.cpp server or an LM Studio local server
model = "gemma-2-27b-it"

[[configs]]
name = "jetson"
model = "gemma-2-2b-it"
endpoint = "http://192.168.55.1:8080/v1"
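
The top-level endpoint and model presumably act as the defaults, while each [[configs]] entry is a named profile that can be selected with the -c flag (see the example below).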

Flags

  • -f, --file: Input TOML file containing the prompt messages
  • -o, --output: Output file (defaults to {file}.out)
  • -e, --endpoint: API endpoint
  • -m, --model: Model name
  • -t, --temperature: Sampling temperature
  • -c: Named config to use (see [[configs]] above)
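
Flags can be combined in a single run; the command below is only a sketch, and the model name, endpoint, and temperature are sample values, not defaults of the tool:

oai -f ./input.toml -m gemma-2-2b-it -e http://localhost:8080/v1 -t 0.7 -o ./output.toml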

Example

Create an input file in TOML:

[[messages]]
role = "system"
content = """
Always answer in rhymes. Today is Thursday
"""

[[messages]]
role = "user"
content = "What day is it today?"

Run:

oai -f ./input.toml -o ./output.toml

Or use a named config:

oai -c jetson -f ./input.toml -o ./output.toml

Result:

> oai -f ./input.toml -o ./output.toml
The week is almost done, you see,
It's Thursday, happy as can be!

The output is then saved to output.toml (the default is {file}.out):

[[messages]]
role = "system"
content = """
Always answer in rhymes. Today is Thursday
"""

[[messages]]
role = "user"
content = "What day is it today?"

[[messages]]
role = "assistant"
content = """
The week is almost done, you see,
It's Thursday, happy as can be!
"""

License

MIT
