🚧 this is under-construction.gif 🚧
Oneshot (pre-release)
A command-line tool for sending prompts to AI models. Supports OpenAI, Anthropic, OpenRouter, and self-hosted models.
Installation
npm install -g oneshot
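After installing, you can confirm the CLI is on your PATH. Oneshot is a Commander-based CLI (see the Testing section), and Commander exposes --help by default, so this is an assumption about its setup rather than documented behavior:
# Print usage information
oneshot --help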
Features
- Send prompts to AI models (OpenAI, Anthropic, OpenRouter, self-hosted models)
- Support for system prompts and variations
- Configuration via environment variables or config files (~/.config/oneshot/config.json)
- Model aliases for common AI models
- Save responses to files
- Control reasoning effort for o1/o3 models
- Model Context Protocol (MCP) support for tool usage
- Flexible prompt construction with multiple content types
- Execute commands and analyze their output
- Process meld files for advanced prompt scripting
- AI-assisted file editing with natural language instructions or explicit diffs
Usage
Basic Usage
Send a prompt directly to an AI model:
oneshot "Your prompt here" [options]
Options:
- -m, --model <model> - Which model to use (defaults to claude-3-7-sonnet-latest)
- -s, --system <prompt> - System prompt text
- -e, --effort <level> - Reasoning effort level (low, med/medium, high)
- -o, --output <file> - Write response to file
- --silent - Only output model response (or nothing with -o)
- --verbose - Show detailed output including commands and prompts
- --tools - Enable tool usage for supported models
- --yolo - Auto-approve all tool actions without prompting
- --mcp-server <n> - Use specific MCP server from config
- --meld - Process .md files as meld files
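For example, several of these flags can be combined in a single call (a hypothetical invocation using only the flags documented above):
# Ask o1 with high reasoning effort and save the reply to a file
oneshot "Summarize the trade-offs of event sourcing" -m o1 -e high -o notes.md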
Output Levels
Oneshot provides different output levels to control verbosity:
Silent with Output File (--silent with -o):
- No output to terminal
- Response written only to the specified file
Silent without Output File (--silent without -o):
- Only the model's response is output to the terminal
- No progress indicator
- No original prompt display
- No run command output
Default (no flags):
- Show progress indicator
- Show model response
- Suppress run command output
- Don't show the sent prompt
Verbose (--verbose):
- Show run command output
- Show the fully built prompt (including run command output)
- Show progress indicator
- Show model response
- Show detailed debug information
Note: When using prompts with exclamation points (!), use single quotes instead of double quotes to avoid shell interpretation issues:
# This works correctly:
oneshot 'Hello, world!' -m claude-3-7-sonnet-latest
# This may cause the shell to hang:
oneshot "Hello, world!" -m claude-3-7-sonnet-latest
Content Types
Oneshot supports multiple content types that can be combined in any order. Each content type is wrapped in XML tags in the final prompt, making it clear to the model what each part represents; think of them as sections of your prompt (see the sketch after the list below).
- -c, --context <content> - Context content
- -t, --task <content> - Task content
- -i, --instructions <content> - Instructions content
- -p, --prompt <content> - Prompt content
- -f, --file <path> - File content
- -r, --run <command> - Run command and include output
- --edit <file_path> - File to edit (can be used multiple times)
- --diff <instructions> - Edit instructions for preceding --edit
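For illustration, a command combining several content types might produce a final prompt shaped roughly like the sketch below. The exact tag names are an assumption; the documented behavior is only that each part gets its own XML tag:
oneshot --context "I'm a developer" --task "Explain this code" --file code.js
# Hypothetical final prompt sent to the model:
# <context>I'm a developer</context>
# <task>Explain this code</task>
# <file path="code.js">...contents of code.js...</file>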
File Editing with --edit and --diff
The --edit flag provides AI-assisted editing right from the command line. With a single command, an AI model can read your files, understand their content, and apply changes based on natural language instructions.
# The simplest way to edit a file - just one command!
oneshot --edit myfile.js --diff "Refactor to use async/await instead of promises"
Alternatively, you can pass an explicit diff of the changes you want, or a path to a file containing them.
oneshot --edit myfile.js --diff diff.txt
You can also make multiple edits in one line:
oneshot --edit myfile.js --diff diff1.txt --edit file2.js --diff diff2.txt
Right now, diffs are grouped with the preceding --edit file. You can also pass several --edit files followed by a single --diff, and the model will usually infer which changes apply where:
oneshot --edit myfile.js --diff diff1.txt --edit file2.js --edit file3.js --diff diff.txt
This powerful feature enables you to:
- Refactor complex code with a single instruction
- Fix bugs by simply describing what's wrong
- Add features without writing a single line of code yourself
- Standardize patterns across multiple files
- Make systematic changes that would be tedious to do manually
How It Works
When you use the --edit flag, Oneshot:
1. Enables tool usage automatically - The AI gets access to read and modify files
2. Reads the specified file(s) - The AI analyzes the file's content and structure
3. Applies intelligent changes - Based on your instructions or the main prompt
4. Shows you a preview of changes - You can review before they're applied (unless using --yolo)
5. Writes the changes back - Only after your approval
Basic Usage Examples
# Edit a single file with general instructions
oneshot "Fix error handling in this file" --edit path/to/file.js
# Edit a file with specific diff instructions - this is the most direct approach
oneshot --edit path/to/file.js --diff "Add input validation to the login function"
# Edit multiple files at once
oneshot "Standardize error handling" --edit file1.js --edit file2.js
# Edit multiple files with specific instructions for each
oneshot "Improve code quality" --edit file1.js --diff "Add better error handling" --edit file2.js --diff "Fix input validation"
# Combine with other content types for more context
oneshot --task "Update deprecated API calls" --context "We're using Node.js 18" --edit src/api.js
Advanced Usage and Diff Instructions
The --diff parameter accepts three types of input:
Natural language instructions: The most intuitive way; just describe the changes you want
oneshot --edit app.js --diff "Convert this to TypeScript with proper type definitions"
File path containing instructions: Useful for more complex changes
# where instructions.txt contains detailed editing instructions
oneshot --edit app.js --diff instructions.txt
Traditional unified diff format: For precise, line-by-line changes
oneshot --edit app.js --diff "--- login function
+++ login function with validation
@@ Change the login function to validate email format"
Real-World Examples
Here are some practical examples of what you can do with the edit feature:
# Modernize legacy code
oneshot --edit legacy.js --diff "Refactor to use modern JavaScript features like arrow functions, template literals, and destructuring"
# Add JSDoc comments
oneshot --edit utils.js --diff "Add comprehensive JSDoc comments to all functions"
# Fix accessibility issues
oneshot --edit component.jsx --diff "Fix all accessibility issues in this React component and add proper ARIA attributes"
# Update API implementation
oneshot --edit api.js --diff "Update this API client to use the new v2 endpoints as described in https://api.example.com/docs"
# Implement a feature
oneshot --edit app.js --diff "Add a dark mode toggle feature that persists user preference in localStorage"
# Optimize performance
oneshot --edit heavyProcess.js --diff "Optimize this function for better performance by reducing complexity and avoiding unnecessary calculations"
You can use --no-tools to disable tool usage even when using --edit, or --yolo to auto-approve all tool actions without prompting for confirmation.
File Editing Command
In addition to the --edit flag, Oneshot also provides a dedicated edit command for a more focused file editing experience:
oneshot edit <file_path> [options]
Options:
- -d, --diff <instructions> - Explicit edit instructions or diff to apply
- -m, --model <model> - Model to use (defaults to your default model)
- -s, --system <prompt> - System prompt for the edit
- --output <file> - Save edits to a new file instead of overwriting
- --tools - Enable tool usage (enabled by default for the edit command)
- --no-tools - Disable tool usage
- --yolo - Auto-approve all tool actions without prompting
- --mcp-server <n> - Use specific MCP server from config
Examples:
# Edit a file with default instructions (AI decides what to improve)
oneshot edit src/app.js
# Edit with specific instructions
oneshot edit src/app.js -d "Add input validation to the login function"
# Edit with a structured diff format
oneshot edit src/app.js -d "--- login function
+++ login function with validation
@@ Change the login function to validate email format and password length"
# Edit with a specific model and save to a new file
oneshot edit src/app.js -m claude-3-opus-latest --output src/app.improved.js
# Edit with a custom system prompt
oneshot edit src/app.js -s "You are a security expert. Find and fix any security issues."
Using Meld Files
Oneshot has built-in support for meld, a prompt scripting language that provides powerful features for creating AI prompts:
- Files with extensions .mld and .mld.md are automatically processed with meld.
- Files with extension .md can be processed with meld by adding the --meld flag.
- Output files follow the naming convention filename-reply.md (with numeric suffixes for duplicates).
Example:
# Process a meld file and send to AI
oneshot prompt.mld -m claude-3-opus-latest
# Process a markdown file with meld
oneshot prompt.md --meld -m o1-mini
Meld allows you to use variables, directives, and scripting in your prompts:
@text greeting = "Hello"
@text name = "World"
${greeting}, ${name}!
@run [ls -la]
See the meld documentation for more details on writing meld files.
Examples
# Send a simple prompt
oneshot "What is the capital of France?"
# Use a file as input
oneshot myfile.md
# Combine multiple content types
oneshot --context "I'm a developer" --task "Explain this code" --file code.js
# Run a command and analyze its output
oneshot --task "Summarize the failing tests" --run "npm test"
# Run multiple commands in sequence
oneshot --run "ls -la" --run "git status" --task "Explain what's going on in this repository"
# Enable tool usage with a specific MCP server
oneshot "Show me the git status" --tools --mcp-server git
# Send prompt and save response to file
oneshot "What is the capital of France?" -o response.txt
# Edit a file with AI assistance
oneshot "Update error handling" --edit src/app.js
# Edit multiple files with specific instructions
oneshot "Standardize API error responses" --edit src/api.js --diff "Add consistent error codes" --edit src/client.js --diff "Update error handling"
Configuration
You can configure your API keys and MCP servers in several ways:
- Environment variables:
export ANTHROPIC_API_KEY=<your-key> # For Claude models
export OPENAI_API_KEY=<your-key> # For GPT models
export OPENROUTER_API_KEY=<your-key> # For models via OpenRouter
- Config file in ~/.config/oneshot/config.json:
{
"anthropicApiKey": "your-key",
"openaiApiKey": "your-key",
"openrouterApiKey": "your-key",
"modelAliases": {
"claude": "claude-3-7-sonnet-latest",
"sonnet": "claude-3-7-sonnet-latest",
"opus": "claude-3-opus-latest",
"haiku": "claude-3-5-haiku-latest",
"4o": "chatgpt-4o-latest",
"o1": "o1",
"o1-mini": "o1-mini"
},
"mcp": {
"servers": {
"filesystem": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/data"]
},
"git": {
"command": "uvx",
"args": ["mcp-server-git", "--repository", "./"]
},
"fetch": {
"command": "uvx",
"args": ["mcp-server-fetch"]
}
},
"defaultServer": "filesystem",
"toolsAllowed": ["*"],
"requireToolConfirmation": true
}
}
Self-Hosted Models Configuration
You can configure self-hosted models to use local or self-hosted LLM servers like Ollama, LM Studio, or any other compatible API server.
Basic Configuration
Add self-hosted models to your config file:
{
"selfHosted": {
"local-llama": {
"url": "http://localhost:8000/v1",
"provider": "openai",
"headers": {
"Custom-Header": "value"
}
},
"local-claude": {
"url": "http://localhost:8001/v1",
"provider": "anthropic",
"authToken": "local-token"
}
}
}
Each self-hosted model configuration requires:
- url: The endpoint URL for your model
- provider: The API format to use ('openai', 'anthropic', etc.)
- headers (optional): Custom headers to include with requests
- authToken (optional): Authentication token if required
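Before pointing oneshot at a self-hosted endpoint, you can sanity-check that it speaks the expected API format. For an 'openai' provider, a minimal chat completions request should return JSON (the model name local-llama is just the example from the config above):
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "local-llama", "messages": [{"role": "user", "content": "hi"}]}'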
Ollama Configuration
To use models from Ollama:
{
"selfHosted": {
"ollama-llama3": {
"url": "http://localhost:11434/v1",
"provider": "openai"
},
"ollama-mistral": {
"url": "http://localhost:11434/v1",
"provider": "openai"
}
},
"modelAliases": {
"llama3": "ollama-llama3",
"mistral": "ollama-mistral"
}
}
Use Ollama models with:
# Start Ollama server first
ollama serve
# Use the model with oneshot
oneshot -m ollama-llama3 "Your prompt here"
# Or use the alias
oneshot -m llama3 "Your prompt here"
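If the model isn't available locally yet, pull it first with the standard Ollama CLI (assuming the underlying Ollama model is named llama3):
ollama pull llama3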
LM Studio Configuration
To use models from LM Studio:
{
"selfHosted": {
"lmstudio": {
"url": "http://localhost:1234/v1",
"provider": "openai"
}
},
"modelAliases": {
"local": "lmstudio"
}
}
Use LM Studio models with:
# Start the LM Studio server first and ensure the API server is enabled
# Then use the model with oneshot
oneshot -m lmstudio "Your prompt here"
# Or use the alias
oneshot -m local "Your prompt here"
Secret References
You can use 1Password CLI secret references in your config:
{
"anthropicApiKey": "op://vault-name/anthropic/api-key",
"openaiApiKey": "op://vault-name/openai/api-key",
"openrouterApiKey": "op://vault-name/openrouter/api-key"
}
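Resolving these references requires the 1Password CLI (op) to be installed and signed in. You can verify that a reference resolves before using it:
op read "op://vault-name/anthropic/api-key"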
Default Model Aliases
The following model aliases are available by default:
- claude → claude-3-7-sonnet-latest
- sonnet → claude-3-7-sonnet-latest
- opus → claude-3-opus-latest
- haiku → claude-3-5-haiku-latest
- 4o → chatgpt-4o-latest
- o1 → o1
- o1-mini → o1-mini
You can override these or add your own in your configuration file.
You can also use OpenRouter models (untested) in the following formats:
- Provider-specific format: openai/gpt-4-turbo, anthropic/claude-3-opus, meta/llama-3-70b
- Colon-separated format: openai:gpt-4-turbo, anthropic:claude-3-opus
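For example (assuming OPENROUTER_API_KEY is set, and untested like the formats above):
oneshot -m openai/gpt-4-turbo "Your prompt here"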
Managing Configuration
View and Set Basic Configuration
View current configuration:
oneshot config
Set API keys:
oneshot config --anthropic <key>
oneshot config --openai <key>
Set model aliases:
oneshot config --alias claude=claude-3-latest gpt=chatgpt-4o-latest
Set default model:
oneshot default claude-3-7-sonnet-latest
# or
oneshot --default claude-3-7-sonnet-latest
Configuring Self-Hosted Models (untested, WIP)
Self-hosted models are configured in your config file (~/.config/oneshot/config.json):
{
"selfHosted": {
"local-llama": {
"url": "http://localhost:8000/v1",
"provider": "openai"
},
"my-local-model": {
"url": "http://localhost:8001/v1",
"provider": "anthropic",
"authToken": "mytoken"
},
"custom-model": {
"url": "http://api.example.com/v1",
"provider": "openai",
"headers": {
"X-Custom": "value"
}
}
}
}
After configuring your self-hosted models, create aliases for easier use:
# Create a model alias
oneshot config --alias llama=local-llama
Examples for Popular Self-Hosted Systems
Using Ollama
For Ollama models:
{
"selfHosted": {
"ollama-llama3": {
"url": "http://localhost:11434/v1",
"provider": "openai"
},
"ollama-mistral": {
"url": "http://localhost:11434/v1",
"provider": "openai"
}
},
"modelAliases": {
"llama3": "ollama-llama3",
"mistral": "ollama-mistral"
}
}
# Start Ollama server first
ollama serve
# Use the models
oneshot -m llama3 "Your prompt here"
oneshot -m mistral "Your prompt here"
Using LM Studio
For LM Studio models:
{
"selfHosted": {
"lmstudio": {
"url": "http://localhost:1234/v1",
"provider": "openai"
}
},
"modelAliases": {
"local": "lmstudio"
}
}
# Start LM Studio and enable the API server
# Then use your model
oneshot -m lmstudio "Your prompt here"
# Or use the alias
oneshot -m local "Your prompt here"
Managing MCP Servers
Oneshot provides comprehensive commands for managing MCP servers, similar to Claude Code's approach:
Add a new MCP server:
# Add a server that needs to be launched
oneshot mcp add filesystem --env API_KEY=secret -- npx @modelcontextprotocol/server-filesystem ./data
# Add a URL-based server connection
oneshot mcp add myserver --url http://localhost:8080 --token mytoken
List all configured servers:
oneshot mcp list
Get details for a specific server:
oneshot mcp get filesystem
Remove a server:
oneshot mcp remove filesystem
Set the default server:
oneshot mcp default filesystem
Import servers from Claude desktop configuration:
oneshot mcp import
You can control whether the configuration is stored globally or locally:
# Add to global config (~/.config/oneshot/config.json)
oneshot mcp add myserver --scope global -- npx my-server
# Add to local config (./.config/oneshot/config.json in current directory)
oneshot mcp add myserver --scope local -- npx my-server
Development
Testing
Oneshot uses Vitest for testing. The project has two test commands:
- npm test or npm run test:oneshot: Runs only the ESM-compatible tests in the tests/oneshot/ directory
- npm run test:all: Runs all tests, including tests that are still being migrated to ESM
Testing Guidelines
When writing tests, especially when mocking dependencies, follow these guidelines:
- Place mocks before importing the modules they mock
- Create explicit mock functions rather than mocking entire modules when possible
- For complex Commander-based CLI tests, test the core implementation logic directly rather than the Commander structure
- Use the vi.mock() method with careful attention to hoisting behavior
- Add .js extensions to all imports, including in test files
For CLI command testing, we recommend two main approaches:
- Direct Implementation Testing: Extract the core business logic from Commander handlers and test it directly
- Mocking Approach: Focus on mocking only the essential dependencies (fs, etc.) rather than the entire CLI framework
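A minimal sketch of the direct-implementation approach; the command module and function here are hypothetical, for illustration only:
// commands/greet.js - the Commander action handler delegates to a plain function
export function buildGreeting(name, { shout = false } = {}) {
  const message = `Hello, ${name}!`;
  return shout ? message.toUpperCase() : message;
}

// tests/oneshot/greet.test.js - exercise the logic directly, no Commander involved
import { test, expect } from 'vitest';
import { buildGreeting } from '../../commands/greet.js';

test('buildGreeting respects the shout option', () => {
  expect(buildGreeting('world', { shout: true })).toBe('HELLO, WORLD!');
});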
And a simplified sketch of the mocking approach:
import { vi, test, expect } from 'vitest';

// vi.mock() calls are hoisted above imports, so wrap the mock object in
// vi.hoisted() to make it available when the mock factory runs
const mockFs = vi.hoisted(() => ({
  existsSync: vi.fn(),
  readFileSync: vi.fn(),
  writeFileSync: vi.fn()
}));
vi.mock('fs', () => ({ default: mockFs, ...mockFs }));

// Import the module under test after declaring the mock (path is hypothetical)
const { myCommandFunction } = await import('./myCommand.js');

test('writes the expected file', () => {
  // Test the command implementation logic directly
  myCommandFunction('arg1', 'arg2');
  // Assert expected behavior
  expect(mockFs.writeFileSync).toHaveBeenCalled();
});
License
MIT