# @sudomcp/agent v1.0.18

Sudobase Client (WIP)
## Setup

Create a `.env` file with `OPENAI_API_KEY=xxxxx`, then:

```shell
# In the root folder
yarn
yarn workspaces run build
```
Run a local backend server (follow the instructions in mcppro), because authentication against the deployed backend is still WIP.
## Usage

To enter a chat with no initial prompt and the default system prompt:

```shell
node dist/main.js
```

Optional arguments are `--prompt` (the first user message) and `--sysprompt` (the system prompt):

```shell
node dist/main.js --prompt 'Who is the new pope?' --sysprompt 'You are extremely polite.'
```
## Features

### Conversation

CLI mode is a conversation between the user and the LLM.
### Tool selection

We now support MCP tool calls. Currently, servers are enabled by editing the `mcpServerUrls.json` file, but this will be improved soon.
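The exact schema of `mcpServerUrls.json` is not documented here; a plausible shape, assuming it is simply a list of MCP server URLs (the URLs below are made up for illustration):

```json
[
  "http://localhost:3001/mcp",
  "http://localhost:3002/mcp"
]
```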
### Model selection

The CLI uses the default model (`gpt-4o-mini`), but you can uncomment the `agent.chooseModel` line to switch to `gpt-4.1-2025-04-14`. Right now any OpenAI model that supports tool calling can be used. Supporting inference providers like Together.ai is TODO.
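A minimal sketch of what the model switch might look like. The name `agent.chooseModel` comes from the text above, but its signature and the `Agent` stub below are assumptions for illustration, not the package's actual class:

```typescript
// Hypothetical minimal Agent stub; the real @sudomcp/agent API may differ.
class Agent {
  private model = "gpt-4o-mini"; // assumed default model

  chooseModel(model: string): void {
    this.model = model;
  }

  currentModel(): string {
    return this.model;
  }
}

const agent = new Agent();
// In the real CLI this line ships commented out; uncomment it to switch models:
agent.chooseModel("gpt-4.1-2025-04-14");
console.log(agent.currentModel()); // "gpt-4.1-2025-04-14"
```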
### Callbacks

The CLI uses an `onMessage` callback to display the Agent's messages and an `onToolCall` callback to request authorization for tool calls.
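A sketch of how such callbacks might be wired up. The callback names come from the text above, but the signatures and the `runAgent` driver are assumptions for illustration only:

```typescript
// Hypothetical callback signatures; the real @sudomcp/agent types may differ.
type OnMessage = (role: string, content: string) => void;
type OnToolCall = (toolName: string, args: unknown) => boolean; // true = authorized

interface AgentCallbacks {
  onMessage: OnMessage;
  onToolCall: OnToolCall;
}

// Stub driver standing in for the real agent loop, for illustration only.
function runAgent(cb: AgentCallbacks): void {
  cb.onMessage("assistant", "I need to look that up.");
  const ok = cb.onToolCall("web_search", { query: "new pope" });
  cb.onMessage("assistant", ok ? "Searching..." : "Tool call was denied.");
}

runAgent({
  onMessage: (role, content) => console.log(`[${role}] ${content}`),
  // A real CLI would prompt the user here; we auto-approve for the sketch.
  onToolCall: (toolName) => {
    console.log(`Authorize tool call to ${toolName}? (auto-approved)`);
    return true;
  },
});
```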
## Development Notes

### Architecture

The frontend talks to:

- `Agent` (for conversation, via ChatCompletion)
- `McpServerManager` (to enable and disable tools that have been added)
- `SudoMcpServerManager` (to access the catalog of SudoMCP servers and add them to `McpServerManager`)
`SudoMcpServerManager`:

- tracks the list of available MCP servers (via `sdk/ApiClient`)
- gets the list of tools as required by the UI (via `sdk/ApiClient`)
- adds tools to `McpServerManager`
`McpServerManager`:

- manages (mcpServer, tool) pairs
- handles enabling/disabling of tools
- tracks the list of enabled/available tools per MCP server
- exposes enabled tools to the `Agent`
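The bookkeeping described above could be sketched as follows. This is an illustrative data structure under assumed names, not the package's actual implementation:

```typescript
// Illustrative sketch of the (mcpServer, tool) bookkeeping; the real API may differ.
class McpServerManager {
  // serverUrl -> set of tool names registered for that server
  private tools = new Map<string, Set<string>>();
  // serverUrl -> set of tool names currently enabled
  private enabled = new Map<string, Set<string>>();

  addTool(serverUrl: string, toolName: string): void {
    if (!this.tools.has(serverUrl)) {
      this.tools.set(serverUrl, new Set());
      this.enabled.set(serverUrl, new Set());
    }
    this.tools.get(serverUrl)!.add(toolName);
  }

  setEnabled(serverUrl: string, toolName: string, on: boolean): void {
    const known = this.tools.get(serverUrl);
    if (!known || !known.has(toolName)) return; // ignore unknown tools
    const set = this.enabled.get(serverUrl)!;
    if (on) set.add(toolName);
    else set.delete(toolName);
  }

  // Tools the Agent is allowed to call, across all servers.
  enabledTools(): string[] {
    return [...this.enabled.values()].flatMap((s) => [...s]);
  }
}
```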