# Oneshot (@meldscience/meld v0.1.0)
A set of simple command-line tools for working with AI prompts and code context.
## Installation

```bash
npm install -g @meldscience/oneshotTools
```
## meld
Process a markdown file containing embedded commands and replace them with their output.
```bash
meld input.meld.md [options]
```

### Input Format

```markdown
# My Prompt
Here's what the system looks like:
@cmd[tree src]
And here's the relevant code:
@cmd[cat src/main.ts]
```
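Conceptually, `meld` scans the file for `@cmd[...]` directives and splices in each command's output. The sketch below illustrates that substitution step; it is a simplified illustration of the idea, not the package's actual implementation:

```typescript
// Minimal sketch of @cmd[...] substitution (illustrative only, not meld's real code):
// find each @cmd[...] directive and replace it with the command's stdout.
import { execSync } from 'node:child_process';
import { readFileSync, writeFileSync } from 'node:fs';

function expandCommands(markdown: string): string {
  return markdown.replace(/@cmd\[([^\]]+)\]/g, (_match, command: string) => {
    // Run the embedded shell command and splice its output into the document.
    return execSync(command, { encoding: 'utf8' }).trimEnd();
  });
}

const source = readFileSync('input.meld.md', 'utf8');
writeFileSync('input.meld.generated.md', expandCommands(source));
```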
### Options

- `-o, --outfile <path>` - Output file path (default: `input.meld.generated.md`)
- `--dry-run` - Print commands that would be executed without running them
### Examples
Process a prompt file:
```bash
meld myprompt.meld.md
# Creates myprompt.meld.generated.md
```

Specify output location:

```bash
meld myprompt.meld.md -o output.md
```

## oneshot
Send a prompt to one or more AI models with optional variations.
```bash
oneshot [model...] prompt.md [options]
```

The tool will send the prompt to all specified models in parallel. For example:

```bash
oneshot gpt-4 claude-3 claude-2 prompt.md
```

### Options

- `-o, --outfile <path>` - Save output to file (default: stdout)
- `--system <prompt>` - System prompt
- `--system-file <path>` - System prompt from file
- `--variations <json>` - Variation prompts as JSON array
- `--variations-file <path>` - Variation prompts from file (JSON or YAML)
- `--iterations <number>` - Number of responses per variation (default: 1)
### Examples
Basic usage:
```bash
oneshot claude-3 prompt.md
```

With system prompt:

```bash
oneshot gpt-4 prompt.md --system "You are a helpful programming assistant"
```

With variations from command line:

```bash
oneshot claude-3 prompt.md --variations '["Review as architect", "Review as developer"]'
```

With variations from file:

```yaml
# roles.yaml
architect: "Review this system as a software architect"
developer: "Review this system as a senior developer"
security: "Review this system focusing on security implications"
```

```bash
oneshot claude-3 prompt.md --variations-file roles.yaml
```

Save output to file:

```bash
oneshot claude-3 prompt.md -o response.md
```

Multiple iterations:

```bash
oneshot claude-3 prompt.md --iterations 3
```

## oneshotcat
Combine meld and oneshot functionality - process a prompt script and send it to an AI model.
```bash
oneshotcat [model] input.meld.md [options]
```

Accepts all oneshot options plus:

- `--expanded-outfile <path>` - Save processed prompt script to file
### Examples
Basic usage:
```bash
oneshotcat claude-3 myprompt.meld.md
```

Save both prompt and response:

```bash
oneshotcat claude-3 myprompt.meld.md --expanded-outfile expanded.md -o response.md
```

## Output Formats
### meld Output
Creates a new markdown file with commands replaced by their output:
# My Prompt
Here's what the system looks like:

```
src
├── main.ts
├── lib
│   └── utils.ts
└── tests
    └── main.test.ts
```
And here's the relevant code:
```typescript
import { something } from './lib/utils';
export function main() {
  // ...
}
```

### oneshot/oneshotcat Output
Outputs responses in markdown format, perfect for reading or piping to another command:
```markdown
# Architect Perspective
Based on the codebase structure...
# Developer Perspective
Looking at the implementation...
# Security Perspective
From a security standpoint...
```

The markdown format makes it easy to:
1. Read the output directly
2. Save to a file (-o output.md)
3. Pipe into another command for meta-analysis
4. Process with standard Unix text tools
## Configuration
Configuration can be provided via:
1. Environment variables
2. .meldrc file (in home directory when installed globally)
3. Command line arguments
Priority: CLI args > .meldrc > env vars
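The precedence rule can be pictured as a simple object merge in which later sources override earlier ones. The sketch below is illustrative only; it borrows the `.meldrc` keys and environment variable names shown in the examples below, and is not the package's actual configuration loader:

```typescript
// Illustrative sketch of the precedence rule: CLI args > .meldrc > env vars.
import { readFileSync } from 'node:fs';
import { homedir } from 'node:os';
import { join } from 'node:path';

interface MeldConfig {
  defaultModel?: string;
  anthropicApiKey?: string;
  openaiApiKey?: string;
}

function loadConfig(cliArgs: MeldConfig): MeldConfig {
  const fromEnv: MeldConfig = {
    defaultModel: process.env.DEFAULT_MODEL,
    anthropicApiKey: process.env.ANTHROPIC_API_KEY,
    openaiApiKey: process.env.OPENAI_API_KEY,
  };

  let fromRc: MeldConfig = {};
  try {
    fromRc = JSON.parse(readFileSync(join(homedir(), '.meldrc'), 'utf8'));
  } catch {
    // No readable .meldrc; rely on the other sources.
  }

  // Later spreads win: CLI args override .meldrc, which overrides env vars.
  return { ...fromEnv, ...fromRc, ...cliArgs };
}
```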
### Environment Variables

```bash
ANTHROPIC_API_KEY=your_key
OPENAI_API_KEY=your_key
DEFAULT_MODEL=claude-3
```

### .meldrc

```json
{
  "defaultModel": "claude-3",
  "anthropicApiKey": "your_key",
  "openaiApiKey": "your_key"
}
```

### 1Password Integration
You can securely store your API keys in 1Password and reference them in your .meldrc:
```json
{
  "defaultModel": "claude-3",
  "anthropicApiKey": "op://vault-name/anthropic/api-key",
  "openaiApiKey": "op://vault-name/openai/api-key"
}
```

The tool will automatically resolve these references using the 1Password CLI (`op`) if it's installed.
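As an illustration of what that resolution involves, the sketch below shells out to `op read`, the 1Password CLI command for reading a secret reference. This is a simplified example, not the package's actual code:

```typescript
// Illustrative sketch: resolve an op:// reference by invoking the 1Password CLI.
import { execFileSync } from 'node:child_process';

function resolveSecret(value: string): string {
  // Plain values pass through unchanged; op:// references are read via the CLI.
  if (!value.startsWith('op://')) return value;
  return execFileSync('op', ['read', value], { encoding: 'utf8' }).trim();
}

// Example: resolve a single key from a parsed .meldrc value.
const apiKey = resolveSecret('op://vault-name/anthropic/api-key');
```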
### Configuration Helper

Use the `meld-config` command to manage your configuration:

```bash
# Show current configuration
meld-config --show
# Set API keys
meld-config --anthropic-key <your-key>
meld-config --openai-key <your-key>
# Set default model
meld-config --default-model claude-3
```

## Multiple Models with Variations
You can combine multiple models with variations to get a rich set of perspectives:
```bash
oneshot gpt-4 claude-3 prompt.md --variations-file roles.yaml
```

This will get each role's perspective from each model, with output like:

```markdown
# GPT-4: Architect Perspective
[response...]
# GPT-4: Developer Perspective
[response...]
# Claude 3: Architect Perspective
[response...]
# Claude 3: Developer Perspective
[response...]
```
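The fan-out behind this output is essentially the cartesian product of models and variations, with each response labeled by its model and role. The sketch below models that idea using a hypothetical `sendPrompt` helper; it is not part of the package's public API:

```typescript
// Illustrative fan-out: one request per (model, variation) pair, run in parallel,
// with each response prefixed by a "# Model: Role Perspective" heading.
async function fanOut(
  models: string[],
  variations: Record<string, string>,
  prompt: string,
  sendPrompt: (model: string, system: string, prompt: string) => Promise<string>,
): Promise<string> {
  const jobs = models.flatMap((model) =>
    Object.entries(variations).map(async ([role, system]) => {
      const response = await sendPrompt(model, system, prompt);
      return `# ${model}: ${role} Perspective\n\n${response}`;
    }),
  );
  // All requests run concurrently; results are joined into one markdown document.
  return (await Promise.all(jobs)).join('\n\n');
}
```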
## Command Chaining

The tools are designed to work well with Unix pipes and support chaining for meta-analysis:

```bash
# Get multiple AI perspectives then analyze them together
oneshot claude-3 myprompt.md --variations-file roles.yaml | oneshot claude-3 secondprompt.md
```

Where `secondprompt.md` might contain something like:

```markdown
Review these three responses from different perspectives. Identify:
- Common themes
- Points of dissonance
- A recommended path balancing pragmatism, keeping in mind this is for a single developer/server rather than enterprise software.
---
[Previous responses will be inserted here]
```

You can also use immediate prompts for quick chains:
```bash
oneshot claude-3 myprompt.md --variations-file roles.yaml | oneshot claude-3 "analyze these perspectives and recommend a path forward"
```

## Module Usage
All tools can also be used programmatically:

```typescript
import { PromptScript, Oneshot, Oneshotcat } from 'ai-prompt-tools';
// Process a prompt script
const meld = new PromptScript({
  inputFile: 'prompt.meld.md',
  outputFile: 'output.md'
});
const expanded = await meld.process();

// Send to AI
const oneshot = new Oneshot({
  model: 'claude-3',
  promptFile: 'prompt.md',
  variations: ['perspective 1', 'perspective 2']
});
const responses = await oneshot.process();

// Combined usage
const cat = new Oneshotcat({
  model: 'claude-3',
  promptFile: 'prompt.meld.md',
  variations: ['perspective 1', 'perspective 2']
});
const results = await cat.process();
```