openai-code v1.0.13
OpenAI Code
An unofficial proxy layer that lets you use Anthropic Claude Code with any OpenAI API backend.
This repository provides a proxy server that allows Claude Code to work with OpenAI models instead of Anthropic's Claude models. The proxy translates requests in the Anthropic API format to OpenAI API calls and converts the responses back to Anthropic's format.
Features & Performance (TL;DR)
Smarter, Faster and Cheaper than Claude Code.
- 100% working solution, even a little smarter
- ~2-3x faster due to better OpenAI performance and fully reworked prompts from my side
- ~2x cheaper due to lower token prices (it would be even more, but this is limited by the extra tool-selection prompt; OpenAI, please work on this issue)
- Maintains full compatibility with the Claude Code CLI (last tested version of Claude Code: `0.2.32`)
Technically:
- Proxies Anthropic API requests to the OpenAI API (streaming, non-streaming)
- Handles tool/function call translations (see the sketch below), as well as Task dispatching
- Converts between Anthropic and OpenAI message formats
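For instance, the tool-schema translation boils down to mapping Anthropic's `input_schema` onto OpenAI's `function.parameters`. A minimal sketch, not the actual implementation (the helper name `toOpenAITools` is made up):

```js
// Sketch: translate Anthropic tool definitions into OpenAI tool definitions.
// Both formats describe parameters with JSON Schema, so the mapping is direct.
function toOpenAITools(anthropicTools = []) {
  return anthropicTools.map((tool) => ({
    type: "function",
    function: {
      name: tool.name,
      description: tool.description,
      parameters: tool.input_schema,
    },
  }));
}
```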
Prerequisites
- Node.js (v16 or later)
- An OpenAI API key
- Access to `o3-mini` and `gpt-4o-mini`
- Claude Code CLI globally installed (`npm install -g @anthropic-ai/claude-code@0.2.32`)
Usage: OpenAI Code Proxy
No specific setup is needed; just run `npx openai-code@1.0.13`. It will download this package's code and execute it, binding to port `6543`.
Noob warning: if `npx` is not found, you need to install Node.js first.
Customization
Whenever `openai-code` receives a request, it analyzes the system prompt prepared by Claude Code. It determines the working directory and reads `.env` and `CLAUDE_RULES.md` from the requesting project's working directory. This way, OpenAI Code can offer awesome, project/request-based customizations!
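As a rough illustration of this mechanism (a hypothetical sketch, not the actual implementation: the system-prompt pattern and the `loadProjectCustomization` helper are assumptions of mine):

```js
import fs from "node:fs";
import path from "node:path";

// Hypothetical sketch: extract the working directory from Claude Code's
// system prompt, then load the project's .env and CLAUDE_RULES.md if present.
function loadProjectCustomization(systemPrompt) {
  // The exact pattern Claude Code uses is an assumption for illustration.
  const match = systemPrompt.match(/Working directory: (.+)/);
  if (!match) return { env: "", rules: "" };
  const cwd = match[1].trim();
  const read = (file) => {
    const filePath = path.join(cwd, file);
    return fs.existsSync(filePath) ? fs.readFileSync(filePath, "utf8") : "";
  };
  return { env: read(".env"), rules: read("CLAUDE_RULES.md") };
}
```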
OpenAI endpoint, HTTP Proxy, AI Model names
To customize, create a `.env` file in the project directory you want OpenAI Code to work with, and set the following variables:
```
OPENAI_CODE_API_KEY="your-openai-api-key"

# Remove the comment sign in front when you want to activate one of these:
#OPENAI_CODE_BASE_URL="https://api.openai.com/v1"
#PROXY_URL="http://your-proxy-server:port"
#TOOLS_MODEL="gpt-4o-mini"
#REASONING_MODEL="o3-mini"
```
The `PROXY_URL` is important if you need to route API requests through a proxy, which is often necessary for accessing the OpenAI API from certain regions or networks.

Using `OPENAI_CODE_BASE_URL` you can set e.g. a custom OpenAI/Azure deployment as the target.

`TOOLS_MODEL` is used for tool selection (o-series models showed issues with that) and defaults to `gpt-4o-mini`; `gpt-4o` is tested as well.

`REASONING_MODEL` is used for reasoning and defaults to `o3-mini`.
Custom OpenAI Code Proxy Port
You might want to set the environment variable `OPENAI_CODE_PORT` (NOT in the project directory's `.env`, but globally) to start on a different port than `6543`.

You can also prepend it to the command: `OPENAI_CODE_PORT=7654 npx openai-code@1.0.13`
Custom System Prompts
This customization allows for complete developer freedom. Got a specific toolchain or code style to follow? Put your instructions there!
Create a `CLAUDE_RULES.md` in your project directory. OpenAI Code is smart and will include all instructions written there in its system prompts.
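For example, a `CLAUDE_RULES.md` could look like this (the rules below are made up for illustration):

```md
# Project rules

- Use TypeScript strict mode for all new files.
- Prefer named exports over default exports.
- Run `npm test` after every refactoring and fix failures before finishing.
```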
Usage: Claude Code (Proxy configuration)
You can simply start `claude` like this and it will use OpenAI Code as a proxy:

```sh
DISABLE_PROMPT_CACHING=1 ANTHROPIC_AUTH_TOKEN="test" ANTHROPIC_BASE_URL="http://127.0.0.1:6543" API_TIMEOUT_MS=600000 claude
```
Note: As you can see, this is not very convenient. Linux/macOS/Windows (WSL) allows you to define individual shell command aliases.

Add the following shell function to your shell configuration file:
- For bash: `~/.bashrc` or `~/.bash_profile`
- For zsh: `~/.zshrc`
```sh
claude-openai() {
  DISABLE_PROMPT_CACHING=1 \
  ANTHROPIC_AUTH_TOKEN="test" \
  ANTHROPIC_BASE_URL="http://127.0.0.1:6543" \
  API_TIMEOUT_MS=600000 \
  claude "$@"
}

# only for bash
export -f claude-openai
```
Important: Now you can start `claude-openai` and it will automatically use the `openai-code` proxy. But you need to restart your shell entirely in order for the new alias to be registered (...or `source` the config).
How It Works
- The proxy server listens for requests at the `/v1/messages` endpoint, which follows Anthropic's API format
- It transforms incoming Anthropic-formatted requests into OpenAI API format
- It sends the transformed request to the OpenAI API
- It converts the OpenAI response back to Anthropic's response format (sketched below)
- For streaming responses, it implements server-sent events (SSE) compatible with the Claude Code CLI's expectations
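As a minimal sketch of that response conversion (illustrative only; `toAnthropicResponse` is a made-up name, not necessarily how `index.mjs` does it):

```js
// Sketch: map an OpenAI chat completion back into Anthropic's
// /v1/messages response shape.
function toAnthropicResponse(openaiResponse) {
  const choice = openaiResponse.choices[0];
  return {
    id: openaiResponse.id,
    type: "message",
    role: "assistant",
    model: openaiResponse.model,
    content: [{ type: "text", text: choice.message.content ?? "" }],
    stop_reason: choice.finish_reason === "length" ? "max_tokens" : "end_turn",
    usage: {
      input_tokens: openaiResponse.usage.prompt_tokens,
      output_tokens: openaiResponse.usage.completion_tokens,
    },
  };
}
```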
Differences to Claude Code and Individual Prompts
This project comes with reworked system and tool prompts, crafted by an industry expert who chooses to remain anonymous due to Anthropic's peculiar legal stance.
It is important to note that this project does not infringe upon any EU or US laws, nor does it violate the DMCA, as it does not utilize any Anthropic prompts or code.
Each of my prompts has been meticulously designed by me, an experienced AI engineer.
Rant: This project is developed as free software, single-handedly, in just a few hours. Meanwhile, companies like OpenAI and Anthropic employ entire development teams with million-dollar budgets to achieve similar outcomes. Let's eat the rich, or so they say ;)
Differences to Claude Code: Speed, Quality and Cost
OpenAI Code makes use of all features Claude Code offers. It successfully manages to make Claude Code behave in the very same way it did with Sonnet/Haiku-series models, but using OpenAI models. It also makes correct use of tools, including Tasks (dispatch_agent).
Functional differences only occur when the reasoning model in use differs in behaviour or because my prompts instruct the model differently. For example, I explicitly PROHIBIT the reading of any `.env` files. It's not perfect, but better than not doing anything about it...
I basically threw all Anthropic prompts into the bin because they were... $(insert slur here). When a system prompt arrives, I just extract the context and map the rest to my prompts.
Unlike Anthropic's bloated approach, I take a minimalist approach instead. You can find all reworked prompts in `prompts.mjs` (see the Code tab on npmjs.com; and no, I won't host a repo and doxx myself. You can thank Anthropic for hindering free software from thriving).
By reducing the number of tokens used to an absolute minimum, I not only decrease the cost but also significantly enhance the speed of all operations.
Rant 2: One might praise the competency of Anthropic's business department. Well, let's just say that the verbosity of Anthropic's original system prompts results in a tremendous waste of tokens, increased cost, and decreased speed.
Anthropic's original prompts also point to wrong tool names in their own prompts... thanks to your behaviour, Anthropic, I leave it to you to find out what I mean by this. Have fun!
My streamlined approach ensures that a typical refactoring task, including writing tests and documentation, can be completed in a few seconds for ~2 cents.
Rant 3: Feel free to go ahead and copy my prompts, Anthropic team. This is WTFPL licensed.
Feedback, Bugs, Business Inquiries, Hate, Slurs, Legal Attacks, and Love Letters
Got a bug to squash, a compliment to shower, or a legal threat to send my way? Maybe you just want to share your favorite cat meme or DOGE your whole team and hire me instead? Whatever floats your boat, I'm all ears (or eyes, technically).
Drop me a line at: openai-code-npm@proton.me
Automatic Model Selection
This project automatically selects the appropriate model and reasoning strength for prompt execution.
According to my research:
- `o3-mini` is the optimal OpenAI reasoning model right now (obviously). This is the base model for all reasoning. The reasoning strength, however, is selected according to the actual demands: whenever an error occurs, the reasoning strength is increased. This technique aims at striking a balance between speed, cost and quality (sketched after this list).
- OpenAI's o-series models show deficiencies in selecting tools appropriately; therefore, after reasoning, tool selection is done using a small, dedicated tool-selection prompt, processed by `gpt-4o-mini`.
- Sometimes, `o3-mini` tends to give the user instructions on doing a task instead of working through the task itself. In this case, just tell the model: "No, you do that". Automating this resulted in hard-to-manage endless loops. I continue to investigate this. If you've got a profound and evidence-driven solution (you debugged the code and found a great enhancement), please suggest it to me via email.
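To illustrate the escalation idea from the first point, here is a minimal sketch (my own illustration, not the project's code; the `reasonWithEscalation` helper is made up, while `reasoning_effort` is a real parameter of OpenAI's o-series models):

```js
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_CODE_API_KEY });
const EFFORTS = ["low", "medium", "high"];

// Retry the same reasoning call with a higher reasoning effort on failure.
async function reasonWithEscalation(messages) {
  let lastError;
  for (const effort of EFFORTS) {
    try {
      return await openai.chat.completions.create({
        model: process.env.REASONING_MODEL || "o3-mini",
        reasoning_effort: effort, // escalates: low -> medium -> high
        messages,
      });
    } catch (err) {
      lastError = err; // increase reasoning strength and try again
    }
  }
  throw lastError;
}
```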
More Developer Notes
Do you plan to contribute and email me suggestions? Do you plan to review my code and check whether I might keylog every keyboard entry or send all your secret credentials to my evil server?

Here's an outline of this project's codebase for you to start with:
- Code Structure: The main server logic is contained in `index.mjs` (right, I gave a F on architecture for a few-hundred-line codebase). All prompts are located in `prompts.mjs`, and `utils.mjs` contains a few augmenting functions. Could I have written this in TypeScript or used a nicer architecture? Yes, but why the additional complexity...
- Third-Party Dependencies: Express is used for the low-level HTTP server implementation to handle API requests, responses, and SSE. OpenAI's official library is used for calling the OpenAI APIs. `https-proxy-agent` is used when a `PROXY_URL` is set (useful for enterprise environments or when behind a "great" firewall; a wiring sketch follows this list).
- Error Handling, Logging: Errors are logged using the `logToFile` function, and server errors are handled gracefully with appropriate HTTP responses. All default logging happens in the console (`stdout`).
- Configuration Initialization: The server initializes a configuration file (`.claude.json`) in your home directory, if it doesn't exist yet, setting default Claude Code values for user settings.
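As a rough illustration of the `https-proxy-agent` wiring (a sketch under the assumption that the OpenAI client is constructed once at startup; not a copy of the actual code):

```js
import OpenAI from "openai";
import { HttpsProxyAgent } from "https-proxy-agent";

// Route all OpenAI API traffic through PROXY_URL when it is set.
const openai = new OpenAI({
  apiKey: process.env.OPENAI_CODE_API_KEY,
  baseURL: process.env.OPENAI_CODE_BASE_URL, // undefined falls back to the default
  httpAgent: process.env.PROXY_URL
    ? new HttpsProxyAgent(process.env.PROXY_URL)
    : undefined,
});
```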
Original Author's Verification Key
I'll leave this here, shall I ever want or need to verify that I'm the original author of this codebase.
AAAAC3NzaC1lZDI1NTE5AAAAIMpneofHS0ciT1pVEgZhbqqzbmUgPz0z/VjU91daL5uB