@pinkpixel/codian-context-mcp v1.5.3
Codian Context MCP
A comprehensive context-aware memory system for AI assistants using the Model Context Protocol (MCP).
🚀 CODIAN CONTEXT MCP 🚀
Transform Your Development Process with AI-Powered Context Awareness
Overview
Codian Context MCP provides persistent context awareness for AI assistants using the Model Context Protocol (MCP). It maintains a comprehensive memory system that stores conversations, project milestones, decisions, requirements, and code snippets, allowing AI assistants to maintain context across sessions.
Key Components
- MCP Server: Implements the Model Context Protocol to register tools and process requests
- Memory Database: Supports multiple database backends for persistent storage across sessions
- Memory Subsystems: Organizes memory into specialized systems with distinct purposes
- Vector Embeddings: Transforms text and code into numerical representations for semantic search
Memory Types
The system implements four complementary memory types:
Short-Term Memory (STM)
- Stores recent messages and active files
- Provides immediate context for current interactions
- Automatically prioritizes by recency and importance
Long-Term Memory (LTM)
- Stores project milestones, decisions, and requirements
- Maintains persistent knowledge across sessions
- Organizes information by importance and relevance
Episodic Memory (EM)
- Records sequences of actions and events
- Captures the narrative flow of development
- Helps understand the history and evolution of the project
Semantic Memory (SM)
- Uses vector embeddings for similarity-based retrieval
- Finds conceptually related information across all memory types
- Enables natural language queries for relevant context
Features
- Persistent Context: Maintains conversation and project context across multiple sessions
- Importance-Based Storage: Prioritizes information based on configurable importance levels
- Multi-Dimensional Memory: Combines short-term, long-term, episodic, and semantic memory systems
- Comprehensive Retrieval: Provides unified context from all memory subsystems
- Health Monitoring: Includes built-in diagnostics and status reporting
- Banner Generation: Creates informative context banners for conversation starts
- Flexible Database Options: Supports LanceDB (local), Neon (cloud), and in-memory storage
- Advanced Vector Embeddings: Uses Hugging Face Transformers.js for high-quality semantic embeddings
- Code Indexing: Automatically indexes code files and extracts meaningful snippets
- Modular Architecture: Organized into logical components for maintainability and extensibility
Installation
Prerequisites
- Node.js 18 or higher
- npm or yarn package manager
Database Options
The system supports multiple database backends:
- LanceDB (Local) - Default option, stores data locally with vector search capabilities
- Neon (Cloud) - PostgreSQL database with pgvector extension for vector search
- In-Memory - Fallback option that doesn't persist data between sessions
Setup Steps
Option 1: LanceDB (Local Storage)
This is the default and simplest option, requiring no external services:
# Create a directory for LanceDB data
mkdir -p data/lancedb
# Set environment variables
echo "DB_TYPE=lancedb" > .env
echo "LANCEDB_PATH=./data/lancedb" >> .envOption 2: Neon Database (Cloud Storage)
For persistent cloud storage with PostgreSQL:
- Create a Neon account
- Create a new project
- Enable the pgvector extension in your project
- Get your connection string from the Neon dashboard
- Configure your environment:
# Set environment variables
echo "DB_TYPE=neon" > .env
echo "NEON_CONNECTION_STRING=postgres://user:password@your-neon-db.neon.tech/neondb" >> .env- Configure MCP:
Update .cursor/mcp.json in your project directory with the appropriate database configuration:
{
"mcpServers": {
"codian-context-mcp": {
"command": "npx",
"args": ["@pinkpixel/codian-context-mcp"],
"enabled": true,
"env": {
"DB_TYPE": "lancedb",
"LANCEDB_PATH": "./data/lancedb"
}
}
}
}
For the Neon database, use:
{
"mcpServers": {
"codian-context-mcp": {
"command": "npx",
"args": ["@pinkpixel/codian-context-mcp"],
"enabled": true,
"env": {
"DB_TYPE": "neon",
"NEON_CONNECTION_STRING": "postgres://user:password@your-neon-db.neon.tech/neondb"
}
}
}
}
- Install the package:
npm install @pinkpixel/codian-context-mcp
Or install globally:
npm install -g @pinkpixel/codian-context-mcp
Usage
MCP Tools
The system provides the following MCP tools:
System Tools
- generateBanner: Generates a banner containing memory system statistics and status
- checkHealth: Checks the health of the memory system and its database
- getMemoryStats: Retrieves statistics about the memory system
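As an illustrative sketch, these tools can be called in the same style as the Example Usage section below; the exact argument shapes are not documented here, so the empty arguments (and the generateBanner tool name, inferred by analogy) are assumptions:
// Check health and fetch statistics for the memory system (argument shapes assumed)
const health = await mcp_codian_context_checkHealth({});
const stats = await mcp_codian_context_getMemoryStats({});
// Generate a context banner for the start of a conversation (tool name assumed by analogy)
const banner = await mcp_codian_context_generateBanner({});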
Short-Term Memory Tools
initConversation: Initializes a new conversation and returns comprehensive contextstoreMessage: Stores a message in short-term memorygetRecentMessages: Retrieves recent messages from short-term memorytrackFile: Tracks an active file in the workspacegetActiveFiles: Retrieves active files in the workspace
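For illustration, storing a message and tracking a file might look like the sketch below; the parameter names are assumptions based on the messages and active_files tables described under Database Schema:
// Store a user message in short-term memory (parameter names assumed from the messages schema)
await mcp_codian_context_storeMessage({
  role: "user",
  content: "Let's refactor the database layer next",
  importance: "medium"
});
// Track a file currently being worked on (filename field assumed from the active_files schema)
await mcp_codian_context_trackFile({ filename: "src/db/index.js" });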
Long-Term Memory Tools
- storeMilestone: Stores a project milestone in long-term memory
- getMilestones: Retrieves project milestones from long-term memory
- storeDecision: Stores a project decision in long-term memory
- getDecisions: Retrieves project decisions from long-term memory
- storeRequirement: Stores a project requirement in long-term memory
- getRequirements: Retrieves project requirements from long-term memory
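A hedged example of recording a decision (field names taken from the decisions table under Database Schema; the actual tool parameters may differ):
// Record a project decision in long-term memory (parameter names assumed from the decisions schema)
const decision = await mcp_codian_context_storeDecision({
  title: "Use LanceDB for local storage",
  content: "Default the memory database to LanceDB",
  reasoning: "No external services required and vector search is built in",
  importance: "high"
});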
Episodic Memory Tools
- storeEpisode: Stores an episode in episodic memory
- getEpisodes: Retrieves episodes from episodic memory
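A sketch of recording an episode (field names assumed from the episodes table under Database Schema):
// Record a development event in episodic memory (parameter names assumed from the episodes schema)
const episode = await mcp_codian_context_storeEpisode({
  actor: "assistant",
  action: "refactor",
  content: "Extracted the embedding logic into its own module",
  importance: "medium"
});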
Semantic Memory Tools
- getComprehensiveContext: Retrieves comprehensive context from all memory subsystems
- manageVector: Manages vector embeddings for semantic search
- vectorMaintenance: Performs maintenance operations on vector embeddings
Code Indexing Tools
- indexCode: Indexes code files for semantic search
- searchCode: Searches indexed code files and snippets
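For illustration (the parameter names here are assumptions; file_path follows the code_files schema):
// Index a file, then search the indexed snippets semantically (parameter names assumed)
await mcp_codian_context_indexCode({ file_path: "src/memory/semantic.js" });
const matches = await mcp_codian_context_searchCode({ query: "vector similarity search" });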
Example Usage
// Initialize a conversation
const result = await mcp_codian_context_initConversation({
content: "Tell me about the project structure",
importance: "medium"
});
// Store a milestone
const milestone = await mcp_codian_context_storeMilestone({
title: "Initial Setup Complete",
description: "Configured the development environment and set up the project structure",
importance: "high"
});
// Get comprehensive context
const context = await mcp_codian_context_getComprehensiveContext({
query: "project setup"
});
Vector Embeddings
The system supports two types of vector embeddings for semantic search:
Transformer Embeddings (Default)
By default, the system uses Hugging Face Transformers.js with the Xenova/all-MiniLM-L6-v2 model to generate high-quality semantic embeddings. This model:
- Produces 384-dimensional vectors that capture semantic meaning
- Understands context and relationships between words
- Provides excellent performance for semantic search
- Downloads and caches the model on first use (approximately 50MB)
To use transformer embeddings, set the following environment variables:
EMBEDDING_TYPE=transformer
EMBEDDING_MODEL=Xenova/all-MiniLM-L6-v2
VECTOR_DIMENSIONS=384
Note: The first time you use transformer embeddings, the model will be downloaded. This requires internet access and may take a few moments. The model is cached for future use.
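For reference, the snippet below shows roughly how Transformers.js produces such embeddings; it is an illustrative sketch of the technique, not this package's internal code:
// Illustrative sketch: generating a 384-dimensional embedding with Transformers.js
import { pipeline } from "@xenova/transformers";

const extractor = await pipeline("feature-extraction", "Xenova/all-MiniLM-L6-v2");
const output = await extractor("project setup", { pooling: "mean", normalize: true });
const vector = Array.from(output.data); // 384 numbers capturing semantic meaning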
Basic Embeddings (Fallback)
For environments where downloading models is not possible, or for lightweight usage, the system includes a basic embedding method (see the sketch after this list):
- Uses a deterministic hashing approach
- Requires no external dependencies or downloads
- Produces vectors that maintain some similarity properties
- Not as semantically accurate as transformer embeddings
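The sketch below shows one way a deterministic hashing embedding can work; it is illustrative only and is not the package's actual algorithm:
// Illustrative only: a deterministic hashed bag-of-words embedding (not the package's actual algorithm)
function basicEmbedding(text, dimensions = 128) {
  const vector = new Array(dimensions).fill(0);
  for (const token of text.toLowerCase().split(/\W+/).filter(Boolean)) {
    let hash = 0;
    for (let i = 0; i < token.length; i++) {
      hash = (hash * 31 + token.charCodeAt(i)) >>> 0; // simple rolling hash
    }
    vector[hash % dimensions] += 1; // each token deterministically maps to one dimension
  }
  const norm = Math.hypot(...vector) || 1;
  return vector.map((v) => v / norm); // normalize so dot products behave like cosine similarity
}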
To use basic embeddings, set:
EMBEDDING_TYPE=basic
VECTOR_DIMENSIONS=128
Database Schema
The system uses the following database tables:
- messages: Stores conversation messages
  - id: Unique identifier
  - role: Message role (user, assistant, system)
  - content: Message content
  - created_at: Creation timestamp
  - metadata: Additional metadata
  - importance: Importance level
- active_files: Tracks active files in the workspace
  - id: Unique identifier
  - filename: Path to the file
  - last_accessed: Last access timestamp
  - metadata: Additional metadata
- milestones: Tracks project milestones
  - id: Unique identifier
  - title: Milestone title
  - description: Milestone description
  - importance: Importance level
  - created_at: Creation timestamp
  - metadata: Additional metadata
- decisions: Records project decisions
  - id: Unique identifier
  - title: Decision title
  - content: Decision content
  - reasoning: Decision reasoning
  - importance: Importance level
  - created_at: Creation timestamp
  - metadata: Additional metadata
- requirements: Maintains project requirements
  - id: Unique identifier
  - title: Requirement title
  - content: Requirement content
  - timestamp: Creation timestamp
  - importance: Importance level
- episodes: Chronicles actions and events
  - id: Unique identifier
  - timestamp: Creation timestamp
  - actor: Actor performing the action
  - action: Type of action
  - content: Action details
  - importance: Importance level
  - context: Action context
- vectors: Stores vector embeddings for semantic search
  - id: Unique identifier
  - content_id: ID of the referenced content
  - content_type: Type of content (message, file, snippet)
  - vector: Binary representation of the embedding vector
  - metadata: Additional metadata for the vector
- code_files: Tracks indexed code files
  - id: Unique identifier
  - file_path: Path to the file
  - language: Programming language
  - last_indexed: Timestamp of last indexing
  - metadata: Additional file metadata
- code_snippets: Stores extracted code structures
  - id: Unique identifier
  - file_id: Reference to the parent file
  - start_line: Starting line number
  - end_line: Ending line number
  - symbol_type: Type of code structure (function, class, variable)
  - content: The code snippet content
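To make the relationships concrete, here is a hypothetical messages row and the vectors row that references it (ids and values are illustrative; actual storage types depend on the backend):
// Hypothetical rows illustrating how a vector references stored content
const messageRow = {
  id: "msg_001",
  role: "user",
  content: "Tell me about the project structure",
  created_at: "2024-01-15T10:30:00Z",
  metadata: {},
  importance: "medium"
};
const vectorRow = {
  id: "vec_001",
  content_id: "msg_001",      // points back at the messages row above
  content_type: "message",
  vector: "<binary embedding>",
  metadata: { dimensions: 384 }
};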
Configuration
The system can be configured using environment variables:
Database Configuration
- DB_TYPE: Database type to use (lancedb, neon, in-memory)
- LANCEDB_PATH: Path to the LanceDB database (when using lancedb)
- NEON_CONNECTION_STRING: Connection string for the Neon database (when using neon)
Embedding Configuration
- EMBEDDING_TYPE: Type of embedding to use (transformer, basic)
- EMBEDDING_MODEL: Model to use for transformer embeddings (default: Xenova/all-MiniLM-L6-v2)
- VECTOR_DIMENSIONS: Dimensionality of vector embeddings (default: 384 for transformer, 128 for basic)
Server Configuration
- MCP_LOG_LEVEL: Logging level (error, warn, info, debug)
- MCP_SERVER_NAME: Name of the MCP server (default: "codian-context-mcp")
- MCP_SERVER_VERSION: Version of the MCP server (default: "1.4.0")
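Putting these together, a complete .env for a local LanceDB setup with transformer embeddings might look like this (values are examples, not requirements):
# Example .env combining the options above (values are illustrative)
DB_TYPE=lancedb
LANCEDB_PATH=./data/lancedb
EMBEDDING_TYPE=transformer
EMBEDDING_MODEL=Xenova/all-MiniLM-L6-v2
VECTOR_DIMENSIONS=384
MCP_LOG_LEVEL=info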
Troubleshooting
Common Issues
Database Connection Problems
- For LanceDB: Verify the database path exists and is writable
- For Neon: Verify your connection string is correct and the pgvector extension is enabled
- Check network connectivity for cloud databases
- Verify firewall settings allow the connection
Missing Data
- Check that data was stored with appropriate importance level
- Verify the retrieval query parameters (limit, filters)
- Check the database health with mcp_codian_context_checkHealth()
Performance Issues
- Monitor memory statistics with mcp_codian_context_getMemoryStats()
- Consider archiving old data if the database grows too large
- Optimize retrieval by using more specific filters
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
License
This project is licensed under the MIT License - see the LICENSE file for details.