@imoksan/deep-research-mcp v0.1.1
Deep Research MCP Tool for Cursor AI
Overview
This project is a modular, extensible npm package implementing a Deep Research MCP (Model Context Protocol) tool for use with Cursor AI. It enables robust, autonomous research workflows, integrating real-time web search via the Tavilly API and following best practices for MCP tool development.
Key Requirements
- Modular, extensible architecture for easy refinement and feature addition
- Strict adherence to the deep research process (planning, autonomous search, synthesis, reporting)
- Real data only (no simulation), with Tavilly as the sole external API
- Full compatibility with Cursor AI's agent workflows and approval mechanisms
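The deep research process named above (planning, autonomous search, synthesis, reporting) can be sketched as a typed pipeline. This is an illustrative sketch only: the function and type names (`plan`, `search`, `synthesize`, `deepResearch`, `ResearchPlan`) are hypothetical and not the package's actual API.

```typescript
// Hypothetical shape of the deep research workflow: plan -> search -> synthesize.
// Names and types are illustrative, not the package's real interface.
interface ResearchPlan {
  query: string;
  subQueries: string[];
}

interface SearchResult {
  url: string;
  snippet: string;
}

function plan(query: string): ResearchPlan {
  // The real orchestrator would decompose the query into sub-queries autonomously.
  return { query, subQueries: [query] };
}

async function search(p: ResearchPlan): Promise<SearchResult[]> {
  // Placeholder: the real implementation calls the Tavilly API for each sub-query.
  return p.subQueries.map((q) => ({
    url: "https://example.com",
    snippet: `results for ${q}`,
  }));
}

function synthesize(results: SearchResult[]): string {
  // Combine findings into a report, keeping citations attached to each claim.
  return results.map((r) => `${r.snippet} [${r.url}]`).join("\n");
}

async function deepResearch(query: string): Promise<string> {
  const researchPlan = plan(query);
  const results = await search(researchPlan);
  return synthesize(results);
}
```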
Architecture
- Core MCP Server: Provides stdio/HTTP transport for Cursor AI integration
- Deep Research Orchestrator: Handles query analysis, planning, and workflow
- Tavilly API Integration: Real web search, robust error handling, and retries
- Research Tools: Web search, data extraction, synthesis, reporting, and citation
- Extensibility Points: Easy addition of new tools/resources
- Config/Environment Management: API keys, runtime options
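The "robust error handling and retries" called for in the Tavilly integration might be factored into a generic wrapper like the one below. The helper name `withRetries` and the backoff parameters are assumptions for illustration, not the package's actual code.

```typescript
// Hypothetical retry wrapper for flaky external calls (e.g. the Tavilly API).
// Retries up to `maxAttempts` times with exponential backoff between attempts.
async function withRetries<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 100,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts) {
        // Exponential backoff: baseDelayMs, 2x, 4x, ...
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** (attempt - 1)));
      }
    }
  }
  throw lastError;
}
```

A search call could then be wrapped as `withRetries(() => tavillyClient.search(query))`, keeping the retry policy in one place rather than scattered across tools.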
Setup
- Clone the repository and install dependencies: npm install
- Build the project: npm run build
- Start the MCP server (for Cursor AI stdio integration): npm start
- Configure your Tavilly API key in a .env file: TAVILLY_API_KEY=your_api_key_here
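Reading the key from the environment (populated from .env by a loader such as dotenv) could look like the sketch below; `loadConfig` is a hypothetical helper name, not necessarily the package's actual config module.

```typescript
// Hypothetical config loader: reads TAVILLY_API_KEY from the environment
// and fails fast with a clear message if it is missing.
interface Config {
  tavillyApiKey: string;
}

function loadConfig(
  env: Record<string, string | undefined> = process.env,
): Config {
  const tavillyApiKey = env.TAVILLY_API_KEY;
  if (!tavillyApiKey) {
    throw new Error("TAVILLY_API_KEY is not set; add it to your .env file");
  }
  return { tavillyApiKey };
}
```

Failing fast at startup keeps a missing key from surfacing later as an opaque Tavilly API error mid-research.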
Testing & Development Workflow
- Test Runner: This project uses Jest for unit and integration testing.
- Test Files: All test files are located in the tests/ directory and follow the .test.ts naming convention.
- Running Tests: npm test
- Adding Tests: Add new test files or cases in the tests/ directory. Use test-driven development (TDD) to ensure extensibility and robustness as you add new features or tools.
- Best Practices:
  - Write tests for every new tool, orchestrator function, or integration.
  - Use mocks and stubs for external APIs and server interactions where needed.
  - Keep tests up to date as the codebase evolves.
 
Next Steps
- Implement the Deep Research Orchestrator module
- Add Tavilly API integration with robust error handling
- Develop research tools for web search, extraction, synthesis, and reporting
- Write tests for all modules and ensure robust, real-world data handling
- Document extensibility points for future features
For more details, see cursor-mcp.md and dr-process.md in the project root.