microtec-database-creator v0.3.14

License: ISC
Last release: 1 year ago

Database Creator

[TOC]

1. Description

"Database creator" is a library for creating and maintaining databases. It combines automatic maintenance of the database structure with the possibility of executing migration scripts. In addition, the library takes care of index fragmentation and archiving.

The database structure is defined in a JSON file, which is passed to the library.

All methods take the following parameters:

  • sqlServerConnectionInfo: object of the type "SqlServerConnectionInfo" with the database connection information.
  • databaseDescriptionJsonFile: path to the JSON file with the database definition.
  • customerDatabaseJsonFile: path to the JSON file with customer-specific data and scripts. This parameter is optional; pass undefined if it is not to be used.
  • logFileOrLogger:
    • if it is undefined or missing: logging is not used.
    • if it is a string: path to a text file used for logging. If the file does not exist, it will be created.
    • if it is a Logger object from the package microtec-logger: this logger is used for all logging.
  • tableNamesPrefix: every occurrence of %TABLEPREFIX% in the JSON description files is replaced by this string. This parameter is optional; pass undefined if it is not to be used.
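The prefix mechanism can be illustrated with a small sketch (the table name "Orders" is only an illustrative example, not part of this library):

{
    "tables": [
        {
            "name": "%TABLEPREFIX%Orders",
            "columns": [
                { "name": "id", "dataType": "INT", "isAutoIncrement": true }
            ]
        }
    ]
}

With tableNamesPrefix set to "T1_", the table is created as T1_Orders.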

2. Methods

The library offers the following public methods.

2.1 checkDbStructure

checkDbStructure(sqlServerConnectionInfo: SqlServerConnectionInfo, databaseDescriptionJsonFile: string, customerDatabaseJsonFile?: string, logFileOrLogger?: string | Logger, tableNamesPrefix?: string): Promise<void>

Can throw exceptions of type Error.

This method checks the database structure and changes it if needed. Index fragmentation is neither checked nor resolved. This method should normally be called at every start of the application, before any other database interaction happens.

2.2 checkDbStructureAndIndexFragmentation

checkDbStructureAndIndexFragmentation(sqlServerConnectionInfo: SqlServerConnectionInfo, databaseDescriptionJsonFile: string, customerDatabaseJsonFile?: string, logFileOrLogger?: string | Logger, tableNamesPrefix?: string): Promise<void>

Can throw exceptions of type Error.

This method does everything checkDbStructure does and, in addition, checks the fragmentation of the indexes and resolves it if needed.

The indexes are checked depending on the execution settings reorganizeIndexesIfFragmentationGreaterThanPercent and rebuildIndexesIfFragmentationGreaterThanPercent. See section "Execution settings" for details.

2.3 archive

archive(sqlServerConnectionInfo: SqlServerConnectionInfo, databaseDescriptionJsonFile: string, customerDatabaseJsonFile?: string, logFileOrLogger?: string | Logger, tableNamesPrefix?: string): Promise<void>

Can throw exceptions of type Error.

This method does everything checkDbStructureAndIndexFragmentation does and, in addition, archives the database according to the archiving settings. See section "Archiving" for details.

2.4 createIndexesInArchiveDatabase and deleteIndexesInArchiveDatabase

createIndexesInArchiveDatabase(sqlServerConnectionInfo: SqlServerConnectionInfo, databaseDescriptionJsonFile: string, customerDatabaseJsonFile?: string, logFileOrLogger?: string | Logger, tableNamesPrefix?: string): Promise<void>
deleteIndexesInArchiveDatabase(sqlServerConnectionInfo: SqlServerConnectionInfo, databaseDescriptionJsonFile: string, customerDatabaseJsonFile?: string, logFileOrLogger?: string | Logger, tableNamesPrefix?: string): Promise<void>

These two functions exist specifically for the archive database. The archive database is normally created entirely without primary keys and indexes. If the archive database is used to search for old elements, queries may take a very long time without indexes. These two functions allow creating and deleting the indexes of the archive database when needed.

3. SqlServerConnectionInfo

interface SqlServerConnectionInfo {
    server: string;        // the server name
    port?: number;          // the TCP port (optional)
    instance?: string;     // the instance of the db server (optional)
    database: string;      // the database name
    user: string;          // authentication
    password: string;      // authentication
}

4. Library execution example

import path from 'path';
import { DbCreator } from 'microtec-database-creator';

try {
    await DbCreator.execute(
        {
            server: 'localhost',
            database: 'DatabaseName',
            user: 'username',
            password: 'password'
        },
        path.resolve(__dirname, './directory/db-description.json'),
        path.resolve(__dirname, './directory/customer.json'),
        path.resolve(__dirname, './directory/log.txt')
    );
}
catch (err) {
    // error handling
}

5. Optional elements

Optional objects and attributes can always be omitted completely. For example, if there are no indexes for a table, there is no need to write an empty array or empty object; the complete tag can be omitted. Another example: there is no need to write "disabled": false if the element should not be disabled; the property can be omitted.

6. Disable and delete elements

Almost all elements can be disabled (add the attribute "disabled": true to the element). The effect is the same as removing the element from the structure, with the advantage that it can remain in the structure. What happens when an element is disabled or deleted in the JSON structure depends on its type:

  • removed/disabled tables are renamed to _TO_DELETE_ + name of the table
  • removed/disabled table columns are renamed to _TO_DELETE_ + name of the column
  • removed/disabled foreign keys are removed from the database
  • removed/disabled indexes are removed from the database

This ensures that no data is lost in case of a faulty configuration. To really delete a table or column, delete the renamed elements manually in a second step, or use scripts.

When a table/column marked for deletion should later be restored, it is enough to add/enable it again in the JSON structure. The object with all its data is restored as it was before. The process, explained with an example: the column c1 is removed from the JSON structure -> the column is renamed to _TO_DELETE_c1, but remains in the database -> the column c1 is added again in the JSON structure -> the column _TO_DELETE_c1 is renamed back to c1 in the database.
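For example, to disable a column instead of removing it from the structure (the column name "legacyCode" is only illustrative):

"columns": [
    { "name": "legacyCode", "dataType": "INT", "disabled": true }
]

The column is renamed to _TO_DELETE_legacyCode in the database; setting "disabled" back to false (or removing the attribute) restores it together with its data.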

7. Rename tables / columns

When a table or column is to be renamed, two cases can be distinguished:

  • If the change only concerns upper and lower case (for example Tablename => TableName), the new name can simply be assigned without considering any other aspects.
  • In all other cases, the system must also be informed of the old name. To do this, the property oldNames (string array) must be set within the table / column definition.

Example: rename a column from columnOld to columnNew:

"columns": [
    { "name": "columnNew", "dataType": "INT", "oldNames": [ "columnOld" ] }
]

8. Comments

All unknown attributes of the elements in the JSON data structure are ignored. This makes it possible to add comments and descriptions to the elements:

{
    "comment": "A comment",
    "description": "Description of the element"
}

9. JSON file data structure

{
    "dbVersion": 2,             // required
    "recoveryModel": "simple",  // optional (see section 'recoveryModel' for details)
    "settings": {},             // optional (see section 'Execution settings' for details)
    "archiving": {              // optional (see section 'Archiving' for details)
        //...
    },
    "tableTemplates" : [        // optional
        //... 
    ],
    "tables": [                 // required
        //...
    ],
    "storedProcedures": [       // optional
        //...
    ],
    "scripts": [                // optional
        //...
    ]
}

9.1 dbVersion

This parameter is used only as a trigger for the scripts. It has no effect on the automatic maintenance of the database structure. The library stores the database version in a table named _DB_VERSION_. If this table does not exist, the library automatically creates it and initializes it with the current db version.

9.2 recoveryModel

Optional. The recovery model of the database. See the Microsoft documentation for details.

Possible values: "simple", "full", "bulk-logged". Defaults to "simple" if not specified.

9.3 Execution settings

{
    "settings": {
        "reorganizeIndexesIfFragmentationGreaterThanPercent": number,
        "rebuildIndexesIfFragmentationGreaterThanPercent": number,

        "shrinkDatabaseOnChanges": boolean,
        "forceDatabaseShrink": boolean,
        "rebuildIndexesOnChanges": boolean,
        "forceRebuildIndexes": boolean,
        "rebuildTablesOnChanges": boolean,
        "forceRebuildTables": boolean,
        "commitTransactionAfterEachTable": boolean,
        "useUnsafeMethodToChangeColumnType": boolean,
        "useUnsafeMethodToChangeColumnTypeRowsInOneStep": number,
        "deleteAllAsToDeleteMarkedElements": boolean
    }
}

9.3.1 reorganizeIndexesIfFragmentationGreaterThanPercent

If the index fragmentation is greater than this number (in percent), the index is reorganized. Reorganizing is faster than rebuilding, but cannot solve all possible problems. If the number is -1, this setting is disabled. Defaults to 10 if not specified.

Only has effect when calling the method checkDbStructureAndIndexFragmentation() or archive().

9.3.2 rebuildIndexesIfFragmentationGreaterThanPercent

If the index fragmentation is greater than this number (in percent), the index is rebuilt. Rebuilding is slower than reorganizing, but can solve problems that reorganizing cannot. If the number is -1, this setting is disabled. Defaults to 30 if not specified.

Only has effect when calling the method checkDbStructureAndIndexFragmentation() or archive().

9.3.3 shrinkDatabaseOnChanges

If this option is set to true, the database is shrunk after changes in the database structure.

9.3.4 forceDatabaseShrink

If this option is set to true, the database is always shrunk, regardless of changes.

9.3.5 rebuildIndexesOnChanges

If this option is set to true, the indexes are rebuilt after changes in the database structure.

9.3.6 forceRebuildIndexes

If this option is set to true, the indexes are always rebuilt, regardless of changes.

9.3.7 rebuildTablesOnChanges

If columns are removed or renamed, the unused space is not automatically released. Even a database shrink does not release the space in this case.

This action rebuilds the table and its primary key, so the unused space is released. It runs only when changes have happened.

9.3.8 forceRebuildTables

If columns are removed or renamed, the unused space is not automatically released. Even a database shrink does not release the space in this case.

This action rebuilds the table and its primary key, so the unused space is released. It is forced to run for every table, regardless of changes. Run it during larger maintenance operations.

9.3.9 commitTransactionAfterEachTable

By default all actions are done in one single database transaction, to make sure the database is restored to its original state if an error occurs. If there are many changes, this behavior can be undesirable because too much disk space is consumed.

If this option is set to true, each table in the database gets its own transaction. Be aware that on an error, only the table with the error is restored, not the whole database.

9.3.10 useUnsafeMethodToChangeColumnType

Changing the data type or the nullable setting of a column on big datasets can cause the action to fail due to the disk space required (the transaction log can grow extremely large).

If this option is set to true, changing the table works in another way: a new column is created, the old values are copied step by step to the new column, and at the end the new column is renamed to the original name.

This option is very unsafe. On an error, the database can be left in an undefined state. Please make a backup before using this option.

9.3.11 useUnsafeMethodToChangeColumnTypeRowsInOneStep

The number of rows processed in one step.

This option only has effect if useUnsafeMethodToChangeColumnType is true.

Larger numbers mean more speed, but more disk space consumption for the transaction log.

Defaults to 1,000,000 if not specified.
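As an illustration, a settings fragment enabling the unsafe column type change with a smaller step size could look like this (the values are only examples):

"settings": {
    "useUnsafeMethodToChangeColumnType": true,
    "useUnsafeMethodToChangeColumnTypeRowsInOneStep": 100000
}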

9.3.12 deleteAllAsToDeleteMarkedElements

If this option is set to true, elements are not marked for deletion but are really deleted. Elements previously marked for deletion are deleted too.

9.4 tableTemplates

tableTemplates allows defining templates for columns, indexes, and triggers.

{
    "tableTemplates": [
        {
            "name": "TemplateName",   // required
            "columns": [              // optional
                //...
            ],
            "indexes": [              // optional
                //...
            ],
            "triggers": [                             // optional
                //...
            ]
        },
        {
            //...
        },
        //...
    ]
}

The definitions are the same as in the "tables" section. Please look there for documentation.

9.5 tables

This array holds the database structure (tables with all their child elements).

{
    "tables": [
        {
            "name": "TableName",                      // required
            "disabled": false,                        // optional | Defaults to "false" if the tag is missing.
            "oldNames": [ "OldName1", "OldName2" ],   // optional
            "templates": [                            // optional
                //...
            ],
            "columns": [                              // required (but can be omitted if columns are coming from a template)
                //...
            ],
            "indexes": [                              // optional
                //...
            ],
            "foreignKeys": [                          // optional
                //...
            ],
            "triggers": [                             // optional
                //...
            ],
            "data": {                                 // optional
                //...
            }

        },
        {
            //...
        },
        //...
    ]
}

9.5.1 name

The table name. Required.

9.5.2 oldNames

Array of old names of this table, used to support renaming the table.

9.5.3 templates

This array is optional. Array of template names to be used. Example: "templates": [ "Template1", "Template2" ].
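As a sketch, a template combined with a table that uses it could look like this (the names "AuditColumns" and "Orders" are only illustrative):

"tableTemplates": [
    {
        "name": "AuditColumns",
        "columns": [
            { "name": "createdAt", "dataType": "DATETIME", "defaultValue": "CurrentTimestamp" }
        ]
    }
],
"tables": [
    {
        "name": "Orders",
        "templates": [ "AuditColumns" ],
        "columns": [
            { "name": "id", "dataType": "INT", "isAutoIncrement": true }
        ]
    }
]

The table Orders then gets the column createdAt from the template in addition to its own columns.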

9.5.4 columns

This array is required, but can be omitted if columns are coming from a template.

"columns": [
    {
        "name": "ColumnName",                     // required
        "disabled": false,                        // optional | Defaults to "false" if the tag is missing.
        "oldNames": [ "OldName1", "OldName2" ],   // optional | Array of old names of this column. Is used to support renaming of the column name.
        "dataType": "NVARCHAR(50)",               // required | The datatype of the column (case insensitive). Documentation: <https://learn.microsoft.com/en-us/sql/t-sql/data-types/data-types-transact-sql?view=sql-server-ver16>
        "defaultValue": "DefVal",                 // optional | Can be a string or a number (all types except numbers are represented as string). See details below.
        "isNullable": false,                      // optional | Defaults to "false" if the tag is missing.
        "isPrimaryKey": false,                    // optional | Define if this column is or is part of the primary key. Defaults to "false" if the tag is missing.
        "isAutoIncrement": false,                 // optional | Defines if this field is the auto increment (identity) column of this table. There can be only one auto increment field per table. If "true", the field is automatically part of the primary key, so setting the tag "isPrimaryKey" is not needed. Defaults to "false" if the tag is missing.
        "isUnique": false                         // optional | Defines if the data in this column(s) must be unique. Can be of type boolean or string. See description below. Defaults to "false" if the tag is missing.
    },
    {
        //...
    }
]

dataType: https://learn.microsoft.com/en-us/sql/t-sql/data-types/data-types-transact-sql?view=sql-server-ver16

defaultValue: Can be a string or a number (all types except numbers are represented as strings; example for DATETIME: "2000-01-01 08:00:00"). For date and time types, the current timestamp can be defined as default: use "CurrentTimestamp" to specify this behavior.
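For illustration (the column names are only examples):

"columns": [
    { "name": "status", "dataType": "INT", "defaultValue": 0 },
    { "name": "createdAt", "dataType": "DATETIME", "defaultValue": "CurrentTimestamp" }
]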

isUnique: If this parameter is set to true, the data in this column must be unique. Example: { "name": "taxNumber", "isUnique": true }. If a combination of columns should be unique, a string can be passed to the parameter to identify these columns. Example:

"columns": [
    { "name": "firstName", "dataType": "NVARCHAR(50)", "isUnique": "uniqueNameAndBirthday" },
    { "name": "lastName", "dataType": "NVARCHAR(50)", "isUnique": "uniqueNameAndBirthday" },
    { "name": "birthDay", "dataType": "DATETIME", "isUnique": "uniqueNameAndBirthday" }
],

In this case the combination of firstName, lastName and birthDay must be unique.

9.5.5 indexes

This array is optional.

"indexes": [
    {
        "name": "IndexName",                         // required
        "disabled": false,                           // optional | Defaults to "false" if the tag is missing.
        "columns": [ "column1", "column5" ],         // required | Array of columns building the index
        "includeColumns": [ "column3", "column2" ]   // optional | Array of columns to add to the leaf level of an index (for example if the column is not in the WHERE clause, but only in the SELECT clause of a query)
    },
    {
        //...
    }
]

9.5.6 foreignKeys

This array is optional.

"foreignKeys": [
    { 
        "disabled": false,                         // optional | Defaults to "false" if the tag is missing.
        "columns": ["column3", "column4"],         // required | Columns of this table that form the key. The column count and types must match the primary key of the reference table.
        "reference": {
            "table": "table1",                     // required | Reference table
            "columns": ["column1", "column2"]      // required | Primary key of the reference table. A foreign key can only reference a primary key (the full primary key; it is not enough to reference only a part of the primary key).
        },
        "deleteRule": "CASCADE",                   // optional | Define what happens with the data in this table, if the data in the reference table is deleted.
        "updateRule": "SET DEFAULT"                // optional | Define what happens with the data in this table, if the data (primary key) in the reference table is updated.
    },
    {
        //...
    }
]

deleteRule and updateRule

Possible values (case insensitive):

  • "NO ACTION": SQL Server raises an error and rolls back the action on the row in the parent table if referencing child rows still exist.
  • "CASCADE": SQL Server deletes/updates the rows in the child table that correspond to the row deleted/updated in the parent table.
  • "SET NULL": SQL Server sets the rows in the child table to NULL if the corresponding rows in the parent table are deleted/updated. To execute this action, the foreign key columns must be nullable.
  • "SET DEFAULT": SQL Server sets the rows in the child table to their default values if the corresponding rows in the parent table are deleted/updated. To execute this action, the foreign key columns must have default definitions. Note that a nullable column has a default value of NULL if no default value is specified.

Defaults to "NO ACTION" if the tag is missing.

9.5.7 triggers

This array is optional.

"triggers": [
    { 
        "disabled": false,                         // optional | Defaults to "false" if the tag is missing.
        "name": "TriggerName",                     // required | The name of the trigger
        "event": "AFTER INSERT, UPDATE, DELETE",   // required | { FOR | AFTER | INSTEAD OF } { [ INSERT ] [ , ] [ UPDATE ] [ , ] [ DELETE ] }
        "doNotRunDeleteScriptOnUpdate": false,     // optional | on UPDATE normally the insertScript runs for the updated row and the deleteScript runs for the old row. If this option is set to true, only the insertScript is executed. Defaults to "false" if the tag is missing.
        "insertScript": [                          // optional | the script to be executed for the inserted row (the new row on INSERT, the updated row on UPDATE). Virtual table = INSERTED | string or array of string (to allow multi line commands)
            "...",
            "..."
        ],
        "deleteScript": [                          // optional | the script to be executed for the deleted row (the deleted row on DELETE, the old row before modification on UPDATE). Virtual table = DELETED | string or array of string (to allow multi line commands)
            "...",
            "..."
        ]
    },
    {
        //...
    }
]

9.5.8 data

This tag is optional. It allows inserting/updating some default data in the table. There are two sections within this tag:

  • onTableCreation: this data is added only when the table is newly created. If the table already exists, this section has no effect.
  • checkEveryTime: this data is checked every time the library is executed. If the data is missing, it is added to the table; otherwise the existing data is checked and updated on differences. Only the specified columns are checked/updated; all other columns are not touched. At least the primary key column(s) have to be specified here.

"data": {
    "disabled": false,                                                                     // optional | Defaults to "false" if the tag is missing.
    "onTableCreation": {                                                                   // optional
        "disabled": false,                                                                 // optional | Defaults to "false" if the tag is missing.
        "data": [
            [ { "column": "c1", "value": 1 }, { "column": "c2", "value": "T1" } ],
            [ { "column": "c1", "value": 3 }, { "column": "c2", "value": "T3" } ]
        ]
    },
    "checkEveryTime": {                                                                    // optional
        "disabled": false,                                                                 // optional | Defaults to "false" if the tag is missing.
        "data": [
            [ { "column": "c1", "value": 1 }, { "column": "c2", "value": "Test1" } ],
            [ { "column": "c1", "value": 3 }, { "column": "c2", "value": "Hello1" } ],
            [ { "column": "c1", "value": 5 } ]
        ]
    }

}

9.6 storedProcedures

This array is optional.

"storedProcedures": [
    {
        "disabled": false,                                            // optional | Defaults to "false" if the tag is missing.
        "name": "NameOfTheProcedure",                                 // required
        "parameters": [                                               // optional | Array of parameters. The name of the parameter must begin with an "@". The SQL data type must be specified.
            "@paramName TYPE",
            "@param2 NVARCHAR(50)",
            "@outputParam INT OUTPUT",                                //            example of output parameter
            "@optionalParam INT = 5"                                  //            example of optional parameter
        ],
        "script": "DELETE FROM table6 WHERE c1 = @param2"             // required | The body of the procedure (the script to execute). It can be a string or an array of strings. An array of strings does not mean multiple scripts, but is used only to simulate multi line commands.
    },
    {
        //...
        "script": [                                                   // example of multi line command
            "DELETE FROM table6",
            "WHERE c1 = @param2"
        ]
    }
]

9.7 scripts

This array is optional. This tag makes it possible to implement something like migration scripts.

"scripts": [
    {
        "disabled": false,                                            // optional | Defaults to "false" if the tag is missing.
        "name": "NameOfTheScript",                                    // required
        "minDbVersion": 0,                                            // optional | Run the script if the database version is greater than or equal to this number. Defaults to "0" if the tag is missing.
        "maxDbVersion": 100,                                          // optional | Run the script if the database version is less than or equal to this number. Defaults to "Infinity" if the tag is missing.
        "trigger": "before the change of the database structure",     // required | Defines when the script should be executed. Possible values are "before the change of the database structure" and "after the change of the database structure". (Case insensitive).
        "executeOnlyOnce": false,                                     // optional | If true, the script is executed only once. After execution the script name will be remembered in the table "_EXECUTED_UNIQUE_EXECUTION_SCRIPTS_". Defaults to "false" if the tag is missing.
        "executeOnlyIfCustomTriggered": false,                        // optional | If true, the script is executed only when the script name is present in the table "_CUSTOM_TRIGGERED_SCRIPTS_". Defaults to "false" if the tag is missing.
        "returnsModifiedFlag": false,                                 // optional | If true, the script returns "modified = 0" or "modified = 1". This can be done by "SELECT 0 AS modified" or "SELECT 1 AS modified". Defaults to "false" if the tag is missing.
        "script": "DELETE FROM table6 WHERE c1 = 11"                  // required | The script to execute. It can be a string or an array of strings. An array of strings does not mean multiple scripts, but is used only to simulate multi line commands.
    },
    {
        //...
        "script": [                                                   // example of multi line command
            "DELETE FROM table6",
            "WHERE c1 = 11"
        ]
    }
]

The trigger possibilities are:

  • before the change of the database structure: the script is executed before the automatic maintenance of the database starts.
  • after the change of the database structure: the script is executed after the automatic maintenance of the database has finished.

returnsModifiedFlag: If this option is true, the script returns a flag indicating whether changes were made. This can be done by SELECT 0 AS modified or SELECT 1 AS modified. The purpose of this option is that, when true, a logfile entry is only written when an action really occurred. If set to false, a general message that the script was executed is written to the logfile every time.
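A sketch of a script using this flag (the table name table6 is taken from the examples above; producing the flag via @@ROWCOUNT is only one possible approach):

{
    "name": "Cleanup",
    "trigger": "after the change of the database structure",
    "returnsModifiedFlag": true,
    "script": [
        "DELETE FROM table6 WHERE c1 = 11;",
        "SELECT CASE WHEN @@ROWCOUNT > 0 THEN 1 ELSE 0 END AS modified"
    ]
}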

9.7.1 Script example: rename a column

The example shows the renaming of a column. This is only for demonstration; in reality, renaming a column is solved much better with the built-in rename support (see section "Rename tables / columns").

JSON database structure before renaming:

{
    "dbVersion": 1,
    "tables": [
        {
            "name": "table1",
            "columns": [
                { "name": "oldName", "dataType": "INT" }
            ]
        }
    ]
}

JSON database structure to rename the column:

{
    "dbVersion": 2,
    "tables": [
        {
            "name": "table1",
            "columns": [
                { "name": "newName", "dataType": "INT" }
            ]
        }
    ],
    "scripts": [
        {
            "name": "Rename table1.oldName to table1.newName",
            "maxDbVersion": 1,
            "trigger": "before the change of the database structure",
            "script": "EXEC sp_rename 'dbo.table1.oldName', 'newName', 'COLUMN';"
        }
    ]
}

The current database version is 1. Because of the trigger, the script is executed before the automatic maintenance and renames the column. When the automatic maintenance then runs, the column name already matches the name in the JSON structure, so nothing is modified. The db version in the database is updated to 2. Because of the setting "maxDbVersion": 1, the script will not be executed the next time the library runs.

10. Archiving

"archiving": {
    "disabled": false,                                        // optional
    "type": "moveToArchiveDatabase",                          // required | possible values: "moveToArchiveDatabase" and "delete"
    "maxAgeInDays": 180,                                      // required | days to keep in the database
    "archiveDatabaseSuffix": "_ARCHIVE",                      // required if type = "moveToArchiveDatabase" | Example: if the main database is named "Test", the archive database has the name "Test_ARCHIVE"
    "treatElementsInOneStep": 10000000,                       // optional | elements archived/deleted in one step (transaction), defaults to 1,000,000 when omitted
    "failIfTableContainsAsToDeleteMarkedColumns": false,      // optional | has only effect if type = "moveToArchiveDatabase"
    "shrinkMainDatabaseAfterArchiving": true,                 // optional

    "archivableTables": [                                     // required
        // ...
    ],
    "actions": [                                              // required
        // ...
    ]
}

This section only has an effect when the method archive() is executed.

In case of type = moveToArchiveDatabase, the library maintains the database structure of the archive database in the same way as for the main database, but without creating primary keys, indexes, foreign keys, and data.

type

  • moveToArchiveDatabase: old elements are moved to the archive database
  • delete: old elements are deleted

failIfTableContainsAsToDeleteMarkedColumns

If columns are marked for deletion due to database changes, archiving the table is not possible without data loss (the columns marked for deletion are not part of the archive). If this setting is true, archiving fails in this case; otherwise the data loss is ignored and old elements are archived without the columns marked for deletion.

10.1 archivableTables

Array of strings. Lists all tables that should be affected by the archiving.

This array has two meanings:

  • it is a double check for the following actions.
  • in case of type = moveToArchiveDatabase, all tables not listed here will be fully copied to the archive database.

10.2 actions

{
    "type": "archive",           // required | possible values: "archive" and "getVariable"
    "executionSequence": 10,     // optional | the execution sequence; actions with lower numbers are executed first
    // ...
},

executionSequence

As long as there are no actions in the customer JSON file (or there are customer actions and it is acceptable that they are executed after the actions of the main file), there is no need to specify the execution sequence. Without this option, the actions are executed in the order of their position in the array. If actions in the customer JSON file are to be executed at the beginning or between the other actions, the sequence can be specified using this parameter. However, if this parameter is used, it must be defined for all actions.
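A sketch with hypothetical table names: with explicit sequence numbers, a customer action can be placed between the actions of the main file:

// main JSON file
"actions": [
    { "type": "archive", "tableName": "LogTable", "condition": "[time] < %maxAge%", "executionSequence": 10 },
    { "type": "archive", "tableName": "DataTable", "condition": "[time] < %maxAge%", "executionSequence": 30 }
]

// customer JSON file
"actions": [
    { "type": "archive", "tableName": "CustomTable", "condition": "[time] < %maxAge%", "executionSequence": 20 }
]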

10.2.1 getVariable

Variables can be used in the actions. The variable maxAge is predefined and represents the date calculated from the parameter maxAgeInDays. To use a variable, write the variable name between percent signs. Example: %maxAge%

{
    "type": "getVariable",
    "variableName": "testVar",                  // name of the variable
    "command": [                                // example of a get-variable-command
        "SELECT TOP 1 [time]",                  // the command can be a string or an array of string
        "FROM TestTable",                       // an array is used to simulate multi-line-commands
        "WHERE [time] <= %maxAge%",
        "ORDER BY [time] DESC"
    ],
    "defaultValue": "1980-01-01 00:00:00"       // if the command does not return any value, this value is used. Can be a string or a number. Dates are represented by a string.
}

The variable is assigned the first column of the first row returned by the SQL command. If no row is returned, defaultValue is used.
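Conceptually, the placeholder substitution works like this (a simplified sketch; the function name and the quoting of string values are assumptions, not the library's actual implementation):

```typescript
// Replaces every %name% placeholder in a command with the stored variable
// value. The predefined variable maxAge would be set beforehand from the
// date calculated out of maxAgeInDays.
function substituteVariables(
    command: string | string[],
    variables: Map<string, string | number>
): string {
    const sql = Array.isArray(command) ? command.join("\n") : command;
    return sql.replace(/%(\w+)%/g, (match, name) => {
        const value = variables.get(name);
        if (value === undefined) {
            throw new Error(`Unknown variable: ${name}`);
        }
        // Assumption: string values (e.g. dates) are quoted, numbers are not.
        return typeof value === "number" ? String(value) : `'${value}'`;
    });
}
```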

10.2.2 archive

{
    "type": "archive",
    "tableName": "TestTable",                                // the table to archive
    "condition": "machineId = 1 AND [time] < %testVar%"      // all elements matching this condition will be archived / deleted
}
{
    "type": "archive",
    "tableName": "TestTable",                                // the table to archive
    "condition": [
        "dbField IN (",                                      // example of multi-line condition 
            "SELECT id FROM OtherTable",                     // with connections over other tables
            "WHERE OtherTable.machineId = 1",
            "AND OtherTable.[time] < %testVar%"
        ")"
    ]
}

The archiving type setting determines whether the elements matching the condition are moved to the archive database or deleted.
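The SQL resulting from an archive action can be pictured roughly as follows (illustrative only; the statements actually generated by the library may differ, and the archive database name is an assumed parameter):

```typescript
// Builds the statements for one archive action, depending on the
// configured archiving type.
function buildArchiveSql(
    tableName: string,
    condition: string | string[],
    type: "moveToArchiveDatabase" | "delete",
    archiveDatabase: string
): string[] {
    const where = Array.isArray(condition) ? condition.join("\n") : condition;
    if (type === "delete") {
        return [`DELETE FROM [${tableName}] WHERE ${where}`];
    }
    // Move: copy the matching rows into the archive database, then delete them.
    return [
        `INSERT INTO [${archiveDatabase}].dbo.[${tableName}] SELECT * FROM [${tableName}] WHERE ${where}`,
        `DELETE FROM [${tableName}] WHERE ${where}`
    ];
}
```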

11. Customer JSON file

The parameter customerDatabaseJsonFile makes it possible to add customer-specific modifications. The customer file extends the main file: the elements in the main file are extended by the customer file, not overwritten (with exceptions, see below).

The format of the file is the same as the normal JSON file (only dbVersion cannot be part of the customer file).

The following tags are overwritten by the customer file, if present:

  • recoveryModel
  • settings
  • archiving / disabled
  • archiving / type
  • archiving / maxAgeInDays
  • archiving / archiveDatabaseSuffix
  • archiving / failIfTableContainsAsToDeleteMarkedColumns
  • archiving / shrinkMainDatabaseAfterArchiving

12. Example

{
    "dbVersion": 2,
    "recoveryModel": "simple",

    "settings": {
        "reorganizeIndexesIfFragmentationGreaterThanPercent": 10,
        "rebuildIndexesIfFragmentationGreaterThanPercent": 30
    },

    "tables": [
        {
            "name": "table1",
            "columns": [
                { "name": "id", "dataType": "INT", "isAutoIncrement": true  },
                { "name": "time", "dataType": "TIME(0)", "isNullable": true, "defaultValue": "00:00:00" },
                { "name": "text", "dataType": "NVARCHAR(100)", "defaultValue": "", "isUnique": "uniqueTextAndNumber" },
                { "name": "number", "dataType": "INT", "defaultValue": 0, "isUnique": "uniqueTextAndNumber" }
            ],
            "indexes": [
                {
                    "name": "index1",
                    "columns": [ "time" ],
                    "includeColumns": [ "text", "number" ]
                }
            ]
        },
        {
            "name": "table2",
            "columns": [
                { "name": "id1", "dataType": "INT", "isPrimaryKey": true },
                { "name": "id2", "dataType": "INT", "isPrimaryKey": true },
                { "name": "text", "dataType": "NVARCHAR(125)", "isNullable": true, "defaultValue": "Test" }
            ]
        },
        {
            "name": "table3",
            "columns": [
                { "name": "id", "dataType": "INT", "isAutoIncrement": true  },
                { "name": "referenceColumn1", "dataType": "INT" },
                { "name": "referenceColumn2", "dataType": "INT" }
            ],
            "foreignKeys": [
                { 
                    "columns": [ "referenceColumn1", "referenceColumn2" ],
                    "reference": {
                        "table": "table2",
                        "columns": [ "id1", "id2" ]
                    },
                    "deleteRule": "CASCADE",
                    "updateRule": "CASCADE"
                }
            ]
        },
        {
            "name": "table4",
            "columns" : [
                { "name": "c1", "dataType": "INT", "isPrimaryKey": true },
                { "name": "c2", "dataType": "NVARCHAR(125)" },
                { "name": "c3", "dataType": "NVARCHAR(125)", "defaultValue": "" }
            ],
            "data": {
                "onTableCreation": {
                    "data": [
                        [ { "column": "c1", "value": 1 }, { "column": "c2", "value": "InitialValue1" } ],
                        [ { "column": "c1", "value": 3 }, { "column": "c2", "value": "InitialValue2" } ]
                    ]
                },
                "checkEveryTime": {
                    "data": [
                        [ { "column": "c1", "value": 1 }, { "column": "c2", "value": "UpdatedValue1" } ],
                        [ { "column": "c1", "value": 3 }, { "column": "c2", "value": "UpdatedValue2" } ],
                        [ { "column": "c1", "value": 5 }, { "column": "c2", "value": "NewValue" }, { "column": "c3", "value": "NewValue" } ]
                    ]
                }
            }
        }
    ],
    "scripts": [
        {
            "name": "script1",
            "maxDbVersion": 2,
            "trigger": "after the change of the database structure",
            "script": [
                "DELETE FROM table4",
                "WHERE c1 = 1"
            ]
        }

    ]
}
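Before passing a definition file like the one above to checkDbStructure, a quick sanity check can catch obvious mistakes early. The following helper is hypothetical and not part of the library; it only verifies a couple of constraints that are visible in the example:

```typescript
interface TableDescription {
    name: string;
    columns: { name: string }[];
}

interface DatabaseDescription {
    dbVersion: number;
    tables: TableDescription[];
}

// Returns a list of problems found in the definition (empty if none):
// dbVersion must be a positive integer and table names must be unique.
function validateDescription(description: DatabaseDescription): string[] {
    const errors: string[] = [];
    if (!Number.isInteger(description.dbVersion) || description.dbVersion < 1) {
        errors.push("dbVersion must be a positive integer");
    }
    const seen = new Set<string>();
    for (const table of description.tables) {
        if (seen.has(table.name)) {
            errors.push(`Duplicate table name: ${table.name}`);
        }
        seen.add(table.name);
        if (table.columns.length === 0) {
            errors.push(`Table ${table.name} has no columns`);
        }
    }
    return errors;
}
```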