@mcpflow.io/mcp-mcp-reasoner v1.0.1
MCP Reasoner
This package was packaged by MCPFlow and published to the npm registry.
An MCP server for systematic reasoning in Claude Desktop, implementing beam search and thought evaluation.
Installation and Usage
Run directly with npx:
npx @mcpflow.io/mcp-mcp-reasoner
Or install it first and then run it:
# Install
npm install @mcpflow.io/mcp-mcp-reasoner
# Use
npx @mcpflow.io/mcp-mcp-reasoner
Usage
Installation
git clone https://github.com/frgmt0/mcp-reasoner.git
OR clone the original:
git clone https://github.com/Jacck/mcp-reasoner.git
cd mcp-reasoner
npm install
npm run build
Tool Functions
- `processInput`: processes the input and ensures correct types. Parameters: input (the input data).
- `registerTheTool`: registers the tool. Parameters: none.
- `handleRequests`: handles requests. Parameters: request (the request object).
- `processThought`: processes a thought using the selected strategy. Parameters: request (the processing request).
- `getStats`: returns reasoning statistics. Parameters: none.
- `getStrategyMetrics`: returns metrics for each strategy. Parameters: none.
- `getCurrentStrategyName`: returns the name of the current strategy. Parameters: none.
- `getBestPath`: returns the best reasoning path. Parameters: none.
- `clear`: clears the reasoner state. Parameters: none.
- `setStrategy`: sets the active strategy. Parameters: beamWidth (beam width), strategyType (strategy type), numSimulations (number of simulations).
- `getAvailableStrategies`: lists the available strategies. Parameters: none.
Strategy methods:
- `processThought`: processes a thought. Parameters: request (the request).
- `getBestPath`: returns the best path. Parameters: none.
- `getMetrics`: returns metrics. Parameters: none.
- `clear`: clears the strategy state. Parameters: none.
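For orientation, here is a minimal TypeScript sketch of how these functions could fit together. All interface and type names below (`ReasoningRequest`, `ReasoningStrategy`, `Reasoner`) are hypothetical and only mirror the method names listed above; consult the repository source for the actual types and signatures.

```typescript
// Hypothetical shapes that mirror the function list above; the real
// mcp-reasoner source may use different names and signatures.
interface ReasoningRequest {
  thought: string; // assumed field: the thought text to process
}

// Methods shared by the search strategies, as listed above.
interface ReasoningStrategy {
  processThought(request: ReasoningRequest): Promise<unknown>;
  getBestPath(): unknown;
  getMetrics(): unknown;
  clear(): void;
}

// Top-level reasoner that selects a strategy and delegates work to it.
interface Reasoner {
  // Parameter order follows the listing above (beamWidth, strategyType, numSimulations).
  setStrategy(beamWidth: number, strategyType: string, numSimulations: number): void;
  getAvailableStrategies(): string[];
  getCurrentStrategyName(): string;
  processThought(request: ReasoningRequest): Promise<unknown>;
  getBestPath(): unknown;
  getStats(): unknown;
  getStrategyMetrics(): unknown;
  clear(): void;
}
```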
Original Information
- Developer: Jacck
- Version: 1.0.0
- License: MIT License
- Original repository: Jacck/mcp-reasoner
Original README
MCP Reasoner
A reasoning implementation for Claude Desktop that lets you use both Beam Search and Monte Carlo Tree Search (MCTS). tbh this started as a way to see if we could make Claude even better at complex problem-solving... turns out we definitely can.
Current Version:
v2.0.0
What's New:
Added 2 Experimental Reasoning Algorithms:
- `mcts-002-alpha`: uses the A* Search Method along with an early *alpha* implementation of a Policy Simulation Layer. Also includes early *alpha* implementations of an Adaptive Exploration Simulator and an Outcome Based Reasoning Simulator. *NOTE*: the implementation of these alpha simulators is not complete and is subject to change.
- `mcts-002alt-alpha`: uses the Bidirectional Search Method along with an early *alpha* implementation of a Policy Simulation Layer. Also includes early *alpha* implementations of an Adaptive Exploration Simulator and an Outcome Based Reasoning Simulator. *NOTE*: the implementation of these alpha simulators is not complete and is subject to change.
What happened to mcts-001-alpha and mcts-001alt-alpha?
Quite simply: they were useless and nearly identical to the base
`mcts` method. After initial testing, the results on basic thought processes were nearly the same, showing that simply adding policy simulation may not have an effect.
So why add the Policy Simulation Layer now?
Well, I think it's important to incorporate Policy AND Search in tandem, as that is how most of these algorithms implement them.
Previous Versions:
v1.1.0
Added model control over search parameters:
- `beamWidth` - lets Claude adjust how many paths to track (1-10)
- `numSimulations` - fine-tunes the MCTS simulation count (1-150)
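As a rough illustration, a tool call that tunes these parameters might carry arguments like the following sketch. Only `beamWidth` and `numSimulations` come from the description above; the `thought` and `strategyType` fields are assumptions about the request shape, not the package's confirmed schema.

```typescript
// Sketch of possible tool-call arguments. Only beamWidth and numSimulations
// are documented above (ranges 1-10 and 1-150); the other fields are
// assumptions added for illustration.
const exampleToolArguments = {
  thought: "Compare a direct proof with a proof by contradiction.", // assumed field
  strategyType: "mcts",  // assumed field; strategy names appear elsewhere in this README
  beamWidth: 5,          // documented range: 1-10
  numSimulations: 100,   // documented range: 1-150
};

console.log(JSON.stringify(exampleToolArguments, null, 2));
```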
Features
- Two search strategies that you can switch between:
- Beam search (good for straightforward stuff)
- MCTS (when stuff gets complex) with alpha variations (see above)
- Tracks how good different reasoning paths are
- Maps out all the different ways Claude thinks through problems
- Analyzes how the reasoning process went
- Follows the MCP protocol (obviously)
Installation
git clone https://github.com/frgmt0/mcp-reasoner.git
OR clone the original:
git clone https://github.com/Jacck/mcp-reasoner.git
cd mcp-reasoner
npm install
npm run build
Configuration
Add to Claude Desktop config:
{
  "mcpServers": {
    "mcp-reasoner": {
      "command": "node",
      "args": ["path/to/mcp-reasoner/dist/index.js"]
    }
  }
}
Testing
Benchmarks
Benchmarking will be added soon
Key Benchmarks to test against:
MATH500
GPQA-Diamond
GSM8K
Maybe Polyglot &/or SWE-Bench
License
This project is licensed under the MIT License - see the LICENSE file for details.