@asaidimu/anansi v3.0.0
Anansi Schema Evolution Platform
A comprehensive toolkit for advanced data modelling, schema evolution, and adaptive persistence management in complex enterprise systems.
Table of Contents
- Why the Name "Anansi"?
- Theoretical Foundation
- Overview & Features
- Installation & Setup
- Usage Documentation
- Project Architecture
- Development & Contributing
- Additional Information
Why the Name "Anansi"?
Named after the legendary Akan trickster god of West Africa, our platform embodies Anansi's core attributes:
- Wisdom in Complexity: Like Anansi weaving intricate webs, this platform navigates the complex landscapes of enterprise data systems.
- Adaptive Intelligence: Anansi's legendary ability to transform and adapt mirrors our platform's approach to schema evolution.
- Storytelling of Systems: In Akan folklore, Anansi is a master storyteller who can unravel and reweave narratives, just as our platform manages the intricate narratives of data models.
The name reflects our philosophical approach: transformative, intelligent, and deeply respectful of the complex systems we seek to understand and evolve.
Theoretical Foundation
A mathematically rigorous framework for managing data model complexity, grounded in advanced theoretical principles of system evolution and distributed computing.
Overview & Features
Detailed Description
Anansi is a comprehensive platform built to tackle the challenges of modern enterprise data management. It provides a principled approach to schema design, evolution, and system integration, moving beyond simple CRUD operations to offer a robust, version-controlled environment for your data models. By treating schemas as living, evolving constructs, Anansi enables systematic transformations while preserving system-wide data integrity. Its in-memory capabilities, backed by a persistent Git-enabled registry, ensure both high performance and reliable versioning for collaborative development and deployment. Anansi is ideal for organizations navigating complex, rapidly evolving, and distributed system architectures.
Key Conceptual Components
- Schema Registry: Inspired by the intricate webs of knowledge, this component offers comprehensive metadata tracking and explicit dependency management for all your data schemas.
- Theoretical Migration Framework: Provides formal methods for schema evolution, guaranteeing atomic transformations and supporting bidirectional migrations to ensure data consistency across versions.
- Architectural Abstractions: Enables decoupled schema representation, cross-system consistency models, and adaptive persistence strategies to integrate seamlessly into diverse architectural landscapes.
Key Features
- Schema Definition & Validation: Define complex data structures using a rich `SchemaDefinition` interface, complete with fields, nested schemas, constraints, and indexes. Leverage a powerful, auto-generated validation SDK to ensure data integrity against your defined schemas.
- Schema Evolution & Migration: Manage schema changes over time with a robust `MigrationEngine`. Define schema changes and data transformations to seamlessly evolve your data models forward and backward, preserving data consistency.
- Flexible Persistence Adapters: Interact with various data stores through a unified `Persistence` interface. Includes an in-memory ephemeral persistence layer for rapid prototyping and a production-ready PocketBase adapter with built-in retry mechanisms and error categorization.
- Schema Registry with Version Control: Store, manage, and version your schemas using a `SchemaRegistry` that can operate purely in-memory (backed by `LightningFS`) or integrate with Git for distributed version control, branching, and tagging.
- Code Generation & Utilities: Automatically generate TypeScript types (`schemaToTypes`) and human-readable documentation (`docgen`) directly from your `SchemaDefinition` files. Includes essential utilities for cryptographic hashing, deep merging, and JSON Patch operations.
- Event-driven Data Operations: Subscribe to granular persistence events (e.g., `create:success`, `update:failed`) and define triggers or scheduled tasks to automate workflows around data changes (see the sketch after this list).
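Since event subscriptions are not demonstrated elsewhere in this README, the snippet below is a minimal sketch. The event names come from the list above and `subscribe` is named among the `Persistence` operations, but the exact `subscribe(event, handler)` signature and payload shape are assumptions; consult the type definitions for the real API.

```typescript
import { createEphemeralPersistence } from '@asaidimu/anansi';

const persistence = createEphemeralPersistence({}, {});
const orders = await persistence.createCollection({
  name: 'Order',
  version: '1.0.0',
  fields: {
    id: { name: 'id', type: 'string', required: true },
  },
});

// Assumed subscribe(event, handler) shape; the real payload type may differ.
orders.subscribe('create:success', (event) => {
  console.log('Order created:', event);
});
orders.subscribe('update:failed', (event) => {
  console.error('Order update failed:', event);
});

await orders.create({ data: { id: 'order-1' } }); // should emit create:success
```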
Installation & Setup
Prerequisites
- Node.js: Version 18.x or higher.
- Bun: Recommended for faster installation and script execution. (Alternatively, npm/yarn can be used).
- TypeScript: For developing with Anansi and its generated types.
- PocketBase (Optional): If using the PocketBase persistence adapter, a running PocketBase instance is required.
Installation Steps
Install Anansi into your project using Bun (recommended) or npm:
```bash
# Using Bun
bun add @asaidimu/anansi

# Using npm
npm install @asaidimu/anansi
```
Configuration
Ephemeral Persistence (In-memory)
For local development and testing, you can use the ephemeral persistence layer, which requires no external configuration:
```typescript
import { createEphemeralPersistence } from '@asaidimu/anansi';

// Define your predicate map (if using custom constraints)
const myPredicates = {
  isPositive: ({ data, field, arguments: min }) => data[field] > min,
};

// Create a new in-memory persistence instance
const persistence = createEphemeralPersistence({}, myPredicates); // No functionMap needed for basic use

// Now you can create and manage collections in memory
```
Git-backed Schema Registry
To enable persistent, version-controlled schema management, configure `createGitSchemaRegistry` with a remote Git repository (e.g., GitHub, Gitea). This requires authentication credentials.
```typescript
import { createGitSchemaRegistry, createGithubRepository } from '@asaidimu/anansi';

// Example for GitHub
const remoteRepo = await createGithubRepository({
  username: process.env.GITHUB_USERNAME!,
  password: process.env.GITHUB_PAT!, // Personal Access Token
  repository: 'my-anansi-schemas',
  create: true, // Auto-create if not exists
});

const gitRegistry = await createGitSchemaRegistry('/my-schema-registry', {
  remote: remoteRepo,
  author: { name: 'Anansi Bot', email: 'anansi-bot@example.com' },
  // proxy: 'https://cors.isomorphic-git.org', // Uncomment if encountering CORS issues
});

await gitRegistry.init(); // Initialize the local Git repository and sync with remote
```
PocketBase Persistence
To use PocketBase, you need to provide the PocketBase URL and optionally an auth token:
```typescript
import { createPocketBasePersistence } from '@asaidimu/anansi/sdk/pocketbase'; // Note: path to the specific SDK

const pocketBasePersistence = createPocketBasePersistence({
  url: 'http://127.0.0.1:8090', // Your PocketBase instance URL
  // authToken: 'YOUR_POCKETBASE_AUTH_TOKEN', // Optional: if auth is required
  env: 'development', // 'development' for migrations/rollbacks, 'production' for read-only
});

// Use pocketBasePersistence like any other Anansi persistence instance
```
Usage Documentation
Defining a Schema
Schemas are defined using the `SchemaDefinition` interface. Here's a basic example:

```typescript
import { SchemaDefinition } from '@asaidimu/anansi';
const UserSchema: SchemaDefinition = {
name: "User",
version: "1.0.0",
description: "Represents a user in the system",
fields: {
id: { name: "id", type: "string", required: true, description: "Unique user ID" },
email: {
name: "email",
type: "string",
required: true,
constraints: [{
name: "isEmail",
predicate: "isEmailFormat", // Assumes 'isEmailFormat' is in your predicateMap
parameters: /^[^\s@]+@[^\s@]+\.[^\s@]+$/,
errorMessage: "Must be a valid email address."
}],
description: "User's email address"
},
status: {
name: "status",
type: "enum",
values: ["active", "inactive", "pending"],
default: "pending",
description: "User's current status"
},
profile: {
name: "profile",
type: "object",
schema: { id: "UserProfile" }, // References a nested schema
required: false,
description: "User's profile details"
},
roles: {
name: "roles",
type: "set",
itemsType: "string",
description: "Set of unique roles assigned to the user"
}
},
nestedSchemas: {
UserProfile: {
name: "UserProfile",
fields: {
firstName: { name: "firstName", type: "string", required: true },
lastName: { name: "lastName", type: "string", required: true },
age: { name: "age", type: "number", required: false, default: 18 }
}
}
},
indexes: [
{ name: "emailIndex", fields: ["email"], type: "unique", description: "Ensures unique email addresses" }
],
constraints: [
{
name: "ageConstraint",
operator: "and",
rules: [
{ name: "minAge", predicate: "min", field: "profile.age", parameters: 18 },
{ name: "maxAge", predicate: "max", field: "profile.age", parameters: 120 }
]
}
],
mock: (faker) => ({
id: faker.string.uuid(),
email: faker.internet.email(),
status: faker.helpers.arrayElement(["active", "inactive", "pending"]),
profile: {
firstName: faker.person.firstName(),
lastName: faker.person.lastName(),
age: faker.number.int({ min: 18, max: 99 })
},
roles: faker.helpers.arrayElements(["admin", "user", "guest"], { min: 1, max: 3 })
})
};
```
Ephemeral Persistence
Interact with data using the in-memory persistence layer:
```typescript
import { createEphemeralPersistence, SchemaDefinition } from '@asaidimu/anansi';
const myPredicates = {
isEmailFormat: ({ data, field, arguments: regex }) => regex.test(data[field]),
min: ({ data, field, arguments: minValue }) => data[field] >= minValue,
max: ({ data, field, arguments: maxValue }) => data[field] <= maxValue,
};
const persistence = createEphemeralPersistence({}, myPredicates);
// Assume UserSchema is defined as above
const usersCollection = await persistence.createCollection<typeof UserSchema>({
name: UserSchema.name,
version: UserSchema.version,
fields: UserSchema.fields,
nestedSchemas: UserSchema.nestedSchemas,
constraints: UserSchema.constraints,
indexes: UserSchema.indexes,
});
// Create
const newUser = await usersCollection.create({
data: {
id: 'user123',
email: 'test@example.com',
status: 'active',
profile: { firstName: 'John', lastName: 'Doe', age: 30 },
roles: ['admin', 'user'],
},
});
console.log('Created user:', newUser);
// Read
const activeUsers = await usersCollection.read({
query: { filters: { status: { $eq: 'active' } } },
});
console.log('Active users:', activeUsers);
// Update
const updatedUsers = await usersCollection.update({
query: { email: { $eq: 'test@example.com' } },
data: { status: 'inactive' },
});
console.log('Updated users:', updatedUsers);
// Delete
const deletedCount = await usersCollection.delete({
query: { id: { $eq: 'user123' } },
});
console.log('Deleted users count:', deletedCount);
// Validate
const invalidUser = { id: 'invalid', email: 'bad-email' };
const validationResult = usersCollection.validate(invalidUser);
console.log('Validation issues:', validationResult.issues);
```
Schema Registry (Git-backed)
Manage your schemas persistently with Git integration.
```typescript
import {
createGitSchemaRegistry,
createGithubRepository,
SchemaDefinition,
} from '@asaidimu/anansi';
import { UserSchema } from './my-schemas'; // Assuming UserSchema is defined
// Configure remote repository (e.g., GitHub)
const githubRepo = await createGithubRepository({
username: 'your-github-username',
password: 'your-github-pat', // Use a Personal Access Token
repository: 'anansi-schemas',
create: true, // Create the repo if it doesn't exist
});
// Create Git-backed schema registry
const registry = await createGitSchemaRegistry('/anansi-schemas', {
remote: githubRepo,
author: { name: 'Anansi Bot', email: 'bot@anansi.com' },
});
// Initialize the registry (clones/creates local repo, syncs)
await registry.init();
// Create a schema
await registry.create({ schema: UserSchema });
console.log(`Schema '${UserSchema.name}' created/updated in registry.`);
// List all schemas
const allSchemas = await registry.list();
console.log('All schemas:', allSchemas);
// Retrieve a schema definition
const retrievedSchema = await registry.schema({ name: 'User' });
console.log('Retrieved User schema:', retrievedSchema?.version);
// Update a schema
const updatedUserSchema: SchemaDefinition = {
...UserSchema,
version: "1.1.0",
fields: {
...UserSchema.fields,
phone: { name: "phone", type: "string", required: false, description: "User's phone number" }
}
};
await registry.update({ schema: updatedUserSchema });
console.log(`Schema '${updatedUserSchema.name}' updated to v${updatedUserSchema.version}.`);
// Sync local changes with remote Git repository
await registry.sync();
console.log('Registry synced with remote Git repository.');
// Delete a schema
// await registry.delete({ name: 'User' });
// console.log(`Schema 'User' deleted.`);
```
Schema Migration
Apply and roll back schema changes, including data transformations.
```typescript
import { createEphemeralPersistence, createSchemaMigrationHelper, SchemaDefinition, DataTransform } from '@asaidimu/anansi';
// Assume UserSchema is defined as above
const oldUserSchema: SchemaDefinition = {
...UserSchema,
version: "1.0.0", // Original version
fields: {
id: { name: "id", type: "string", required: true },
oldEmail: { name: "oldEmail", type: "string", required: false } // Field to be migrated
},
nestedSchemas: {}
};
const persistence = createEphemeralPersistence({}, {});
const usersCollection = await persistence.createCollection<any>({
name: oldUserSchema.name,
version: oldUserSchema.version,
fields: oldUserSchema.fields,
nestedSchemas: oldUserSchema.nestedSchemas
});
// Add some dummy data with oldEmail
await usersCollection.create({
data: [
{ id: 'u1', oldEmail: 'user1@old.com', status: 'active' },
{ id: 'u2', oldEmail: 'user2@old.com', status: 'inactive' }
]
});
// Define the migration
const addEmailMigration = (h: ReturnType<typeof createSchemaMigrationHelper>) => {
// Define forward transformation: oldEmail -> email
const forwardTransform: DataTransform<any, any>['forward'] = (data) => ({
...data,
email: data.oldEmail,
oldEmail: undefined // Remove old field
});
// Define backward transformation: email -> oldEmail
const backwardTransform: DataTransform<any, any>['backward'] = (data) => ({
...data,
oldEmail: data.email,
email: undefined // Remove new field
});
h.addField('email', { name: 'email', type: 'string', required: true, description: 'New email field' });
h.removeField('oldEmail'); // Deprecate/remove old field
return { forward: forwardTransform, backward: backwardTransform };
};
// Perform a dry run migration
console.log('\n--- Dry Run Migration (Forward) ---');
const { newSchema: dryRunSchema, dataPreview } = await usersCollection.migrate(
'Migrate oldEmail to email',
addEmailMigration,
true // dryRun = true
);
console.log('Dry run new schema version:', dryRunSchema.version);
const previewRecords = await new Response(dataPreview).json(); // Read from stream
console.log('Dry run data preview:', previewRecords);
// Apply the actual migration
console.log('\n--- Applying Migration (Forward) ---');
await usersCollection.migrate(
'Migrate oldEmail to email',
addEmailMigration,
false // dryRun = false
);
console.log('Migration applied. Current schema version:', usersCollection.schema().version);
const migratedData = await usersCollection.read({});
console.log('Migrated data:', migratedData);
// Roll back the migration (if supported by persistence)
console.log('\n--- Rolling Back Migration ---');
await usersCollection.rollback(undefined, false); // Rollback to previous version
console.log('Rolled back. Current schema version:', usersCollection.schema().version);
const rolledBackData = await usersCollection.read({});
console.log('Rolled back data:', rolledBackData);
```
Type Generation
Generate TypeScript types from your schema definitions.
```typescript
import { schemaToTypes, SchemaDefinition } from '@asaidimu/anansi';
const ProductSchema: SchemaDefinition = {
name: "Product",
version: "1.0.0",
fields: {
id: { name: "id", type: "string", required: true },
name: { name: "name", type: "string", required: true },
price: { name: "price", type: "number", required: true },
currency: { name: "currency", type: "enum", values: ["USD", "EUR", "GBP"] },
details: {
name: "details",
type: "object",
schema: { id: "ProductDetails" }
}
},
nestedSchemas: {
ProductDetails: {
name: "ProductDetails",
fields: {
weight: { name: "weight", type: "number" },
dimensions: {
name: "dimensions",
type: "object",
schema: { id: "ProductDimensions" }
}
}
},
ProductDimensions: {
name: "ProductDimensions",
fields: {
length: { name: "length", type: "number" },
width: { name: "width", type: "number" },
height: { name: "height", type: "number" }
}
}
}
};
const generatedTypes = schemaToTypes(ProductSchema);
console.log(generatedTypes);
/*
// Output will be similar to:
export type ProductDimensions = {
length?: number;
width?: number;
height?: number;
};
export type ProductDetails = {
weight?: number;
dimensions?: ProductDimensions;
};
export type ProductCurrency = "USD" | "EUR" | "GBP";
export type Product = {
id: string;
name: string;
price: number;
currency: ProductCurrency;
details?: string | ProductDetails; // "string" covers references to the concrete nested schema
};
export enum ProductIndexNames {
...
}
*/
```
Documentation Generation
Generate markdown documentation for your schemas.
````typescript
import { docgen, SchemaDefinition } from '@asaidimu/anansi';
import { faker } from '@faker-js/faker';
// Assume ProductSchema is defined as above
const productDoc = docgen(ProductSchema, { faker });
console.log(productDoc);
/*
// Output will be similar to:
# Product Schema (Version 1.0.0)
## Metadata
- **Dependencies:** None
- **Created:** 2024-01-01T00:00:00.000Z
## Fields
| Name | Type | Required | Default | Description | Deprecated | Unique | Constraints |
|----------|--------|----------|---------|--------------------|------------|--------|-------------|
| id | string | Yes | `None` | | No | No | 0 |
| name | string | Yes | `None` | | No | No | 0 |
| price | number | Yes | `None` | | No | No | 0 |
| currency | enum | Yes | `"USD"` | | No | No | 0 |
| details | object | No | `None` | | No | No | 0 |
### Nested Schema: ProductDetails
#### weight (number)
**Required:** No
#### dimensions (object)
##### length (number)
**Required:** No
##### width (number)
**Required:** No
##### height (number)
**Required:** No
## Indexes
| Name | Type | Fields | Unique | Order | Partial Condition | Description |
|------|--------|---------|--------|-------|-------------------|-------------|
| ... | ... | ... | ... | ... | ... | ... |
## Constraints
### Schema-level Constraints
...
## Migrations
| ID | Description | Status | Changes |
|----|-------------|--------|---------|
| ...| ... | ... | ... |
## Example Data
```json
{
"id": "e221b3a4-c5d6-7890-a1b2-c3d4e5f67890",
"name": "Ergonomic Widget",
"price": 99.99,
"currency": "USD",
"details": {
"weight": 0.5,
"dimensions": {
"length": 10,
"width": 5,
"height": 2
}
}
}
```
*/
````
Project Architecture
Anansi is structured to provide a modular and extensible platform for data model management.
Directory Structure
```
.
├── src/
│   ├── lib/              # Core libraries for persistence, registry, migration, schema
│   │   ├── persistence/  # In-memory and adapter-based data persistence (EphemeralCollection)
│   │   ├── registry/     # Schema Registry (LightningFS + Git integration)
│   │   ├── migration/    # Schema migration engine (MigrationEngine)
│   │   └── schema/       # Schema validation, helpers, and utilities
│   ├── sdk/              # Specific SDK implementations (e.g., PocketBase adapter, static validators)
│   ├── types/            # Core TypeScript interfaces (SchemaDefinition, Persistence, Migration, etc.)
│   └── tools/            # General utilities (crypto, merge, patch, typegen, docgen, validator, version)
├── docs/                 # VitePress documentation site
├── tests/                # Unit and integration tests
├── public/               # Public assets for UI (e.g., Vite SVG)
├── index.ts              # Main entry point for the Anansi library
├── package.json          # Project metadata and dependencies
├── dist.package.json     # Package.json for the distributed npm package
├── vitest.config.ts      # Vitest configuration for testing
├── vite.config.ts        # Vite configuration for UI development
└── tsconfig.json         # TypeScript configuration
```
Core Components
- `SchemaDefinition` (`src/types/schema-definition.ts`): The central contract defining the structure of data models, including fields, nested schemas, constraints, and migrations.
- `Persistence` (`src/types/persistence.ts`): An abstract interface for all data storage operations (create, read, update, delete, subscribe), designed for pluggable backends.
- `EphemeralCollection` (`src/lib/persistence/collection.ts`): An in-memory implementation of `PersistenceCollection` for rapid development.
- PocketBase Adapter (`src/sdk/pocketbase/index.tsx`): A concrete `Persistence` implementation for PocketBase, handling schema-driven collection management, migrations, and eventing.
- `MigrationEngine` (`src/lib/migration/index.ts`): Manages the evolution of schemas over time, applying schema changes and data transformations in a controlled, versioned manner.
- `SchemaRegistry` (`src/lib/registry/registry.ts`): The core component for storing and versioning schema definitions. It uses `LightningFS` for in-memory storage.
- `createGitSchemaRegistry` (`src/lib/registry/git-registry.ts`): A factory function that wraps `SchemaRegistry` with `isomorphic-git` to provide persistent, distributed version control for schemas.
- Validators (`src/tools/validator.ts`, `src/lib/schema/validators.ts`): Provide utilities for validating data against `SchemaDefinition` rules and for validating the schema definitions themselves.
- Type Generators (`src/tools/typegen.ts`): Automatically generate TypeScript type definitions from `SchemaDefinition` files, ensuring strict type safety across your codebase.
- Documentation Generator (`src/tools/docgen.ts`): Generates human-readable Markdown documentation for schemas, including fields, indexes, constraints, and example mock data.
Data Flow
- Schema Definition: Developers define data models using the `SchemaDefinition` interface.
- Schema Registration: Schemas are added to the `SchemaRegistry` (either in-memory or Git-backed) for version control and discovery.
- Persistence Layer: A `Persistence` instance (e.g., `EphemeralPersistence`, `PocketBasePersistence`) is initialized with optional predicates and functions for query and validation.
- Collection Interaction: Developers interact with collections (e.g., `usersCollection.create()`, `usersCollection.read()`) through the `PersistenceCollection` interface.
- Validation: All data operations trigger internal validation against the collection's `SchemaDefinition` using the `createStandardSchemaValidator`.
- Schema Evolution: When data models change, `SchemaChanges` are defined, and the `MigrationEngine` applies these changes to both the schema definition and existing data.
- Synchronization (Git): For Git-backed registries, changes are committed and pushed, enabling collaborative schema evolution and traceability. (A condensed end-to-end sketch follows this list.)
Extension Points
- Custom Persistence Adapters: Implement the `Persistence` and `PersistenceCollection` interfaces to integrate Anansi with any new data storage backend (a toy skeleton follows this list).
- Custom Predicates: Define custom validation logic (e.g., `isEmailFormat`, `isStrongPassword`) and provide it to `createEphemeralPersistence` or any other persistence implementation.
- Data Transforms: Write custom `forward` and `backward` transformation functions for migrations, enabling complex data shape changes between schema versions.
- Schema Hints: Extend `InputHint` and `SchemaHint` to guide UI generation or other tooling based on schema metadata.
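As a starting point for a custom adapter, the toy skeleton below shows the general shape using a `Map` in place of a real database client. The interface here is a simplified stand-in written for this sketch; the real `Persistence` and `PersistenceCollection` contracts live in `src/types/persistence.ts` and include more members (validation, migration, subscriptions).

```typescript
// Simplified, hypothetical shape; the real interface is in src/types/persistence.ts.
interface SketchCollection<T> {
  create(args: { data: T | T[] }): Promise<T | T[]>;
  read(args: { query?: unknown }): Promise<T[]>;
  update(args: { query: unknown; data: Partial<T> }): Promise<T[]>;
  delete(args: { query: unknown }): Promise<number>;
}

// Toy adapter backed by an in-process Map instead of a database client.
class MapCollection<T extends { id: string }> implements SketchCollection<T> {
  private rows = new Map<string, T>();

  async create({ data }: { data: T | T[] }): Promise<T | T[]> {
    const items = Array.isArray(data) ? data : [data];
    for (const item of items) this.rows.set(item.id, item);
    return data;
  }

  async read(_args: { query?: unknown }): Promise<T[]> {
    // A real adapter would translate the query into backend filters.
    return [...this.rows.values()];
  }

  async update({ data }: { query: unknown; data: Partial<T> }): Promise<T[]> {
    const updated: T[] = [];
    for (const [id, row] of this.rows) {
      const next = { ...row, ...data };
      this.rows.set(id, next);
      updated.push(next);
    }
    return updated;
  }

  async delete(_args: { query: unknown }): Promise<number> {
    const count = this.rows.size;
    this.rows.clear();
    return count;
  }
}
```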
Development & Contributing
Development Setup
To set up the project for local development:
- Clone the repository:
  ```bash
  git clone https://github.com/asaidimu/data-model.git anansi
  cd anansi
  ```
- Install dependencies using Bun (recommended):
  ```bash
  bun install
  ```
  If you don't have Bun, you can use npm:
  ```bash
  npm install
  ```
Available Scripts
The `package.json` includes several scripts for development workflows:
- `bun ci`: Installs dependencies.
- `bun clean`: Removes the `dist` directory.
- `bun prebuild`: Cleans and runs `./.sync-package.ts`.
- `bun build`: Compiles TypeScript files to `dist/` for CJS and ESM formats, generates declaration files, and minifies.
- `bun build:watch`: Runs `build` in watch mode for continuous compilation.
- `bun postbuild`: Copies `README.md`, `LICENSE.md`, and `dist.package.json` into the `dist` directory.
- `bun test`: Runs unit and integration tests using Vitest.
- `bun test:ci`: Runs Vitest tests in CI mode (runs once, exits).
- `bun test:debug`: Runs Vitest with the debugger attached.
- `bun docs:dev`: Starts the VitePress development server for documentation.
- `bun docs:build`: Builds the static VitePress documentation site.
- `bun ui:dev`: Starts the Vite development server for the example UI.
- `bun docs:preview`: Previews the built documentation site.
Testing
Anansi uses Vitest for its test suite.
To run all tests:
```bash
bun test
```
To run tests in CI mode (non-interactive):
```bash
bun test:ci
```
Tests include coverage checks and are configured to run in a `happy-dom` environment with `fake-indexeddb` for browser-like filesystem operations (`LightningFS`).
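For reference, the environment described above corresponds to a Vitest configuration along these lines. This is an illustrative sketch, not the project's actual `vitest.config.ts`:

```typescript
// vitest.config.ts (illustrative sketch; the real file may differ)
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    environment: 'happy-dom',            // browser-like globals for LightningFS
    setupFiles: ['fake-indexeddb/auto'], // polyfill IndexedDB before tests run
    coverage: { enabled: true },         // coverage checks mentioned above
  },
});
```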
Contributing Guidelines
We welcome contributions! Please follow these guidelines:
- Fork the repository and clone your fork.
- Create a new branch for your feature or bug fix: `git checkout -b feature/my-new-feature` or `bugfix/fix-some-bug`.
- Make your changes, ensuring code adheres to existing style and conventions.
- Write or update tests for your changes to ensure proper functionality and maintain test coverage.
- Ensure all tests pass (`bun test`).
- Use semantic commit messages (e.g., `feat: add new feature`, `fix: resolve bug`). This project uses `semantic-release`.
- Open a Pull Request to the `main` branch of the upstream repository.
Issue Reporting
For bugs, feature requests, or questions, please open an issue on our GitHub Issues page.
Additional Information
Troubleshooting
- `Buffer is not defined` error: If running in a browser environment, ensure that `window.Buffer = Buffer;` is included (see the snippet after this list), as done in `src/lib/registry/registry.ts`.
- CORS issues with `isomorphic-git` or PocketBase: If interacting with remote Git repositories or PocketBase from a browser, you might need a CORS proxy. Configure `createGitSchemaRegistry` or `createPocketBasePersistence` with a `proxy` URL.
- Migration `TRANSFORM_ERROR`: Ensure your `DataTransform` functions are correctly defined and handle all expected input shapes. For remote transforms, verify the URL and module export.
- Git `fastForwardOnly` errors: If `fastForwardOnly` is enabled and conflicts exist, a `git pull` or `git push` may fail. Resolve conflicts manually, or disable `fastForwardOnly` if that is acceptable for your workflow.
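For the first item, a typical browser polyfill looks like the following sketch; it assumes the `buffer` npm package is resolvable by your bundler:

```typescript
import { Buffer } from 'buffer';

// Expose Buffer globally for libraries (such as isomorphic-git) that expect the Node.js API.
if (typeof window !== 'undefined' && !(window as any).Buffer) {
  (window as any).Buffer = Buffer;
}
```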
FAQ
- What is Anansi primarily designed for? Anansi is designed for managing complex enterprise data models, focusing on schema evolution, data integrity, and flexible persistence across distributed systems. It's a comprehensive toolkit, not just a simple ORM or validation library.
- How does Anansi handle data transformations during migrations? Anansi uses `DataTransform` objects within migrations, which contain explicit `forward` and `backward` functions. These functions are executed on data streams to transform data shapes as the schema evolves.
- Is Anansi production-ready? Yes, Anansi is built with production use cases in mind, emphasizing theoretical rigor, data integrity, and extensible architecture. The PocketBase adapter and Git-backed schema registry provide production-grade capabilities for persistence and version control.
- Can I use Anansi with other databases? Yes, Anansi is designed with a pluggable `Persistence` interface. You can create custom adapters for any database or data source by implementing this interface.
Changelog
For a detailed history of changes, features, and bug fixes, please refer to the CHANGELOG.md file.
License
This project is licensed under the MIT License. See the LICENSE.md file for details.
Acknowledgments
Anansi draws inspiration from and builds upon several foundational technologies and concepts:
- SQL Pragmatism: For the approach to schema definition, constraints, and indexing.
- `isomorphic-git`: For enabling Git operations in diverse JavaScript environments.
- `LightningFS`: For providing a performant in-memory filesystem abstraction.
- PocketBase: For its powerful real-time backend capabilities, integrated via a dedicated persistence adapter.
- `@faker-js/faker`: For robust mock data generation capabilities.
- `@standard-schema/spec`: For providing a standardized schema validation specification.
We are grateful to the creators and maintainers of these projects for their invaluable contributions to the open-source ecosystem.