@bradleyayers/node-pg-migrate v2.3.0
pg-migrate
Node.js database migration management built exclusively for postgres.
Installation
```
$ npm install node-pg-migrate
```

Installing this module adds a runnable file into your node_modules/.bin directory. If installed globally (with the -g option), you can run pg-migrate; if not, you can run ./node_modules/.bin/pg-migrate.
Usage
You can specify your database connection information using config.
```
// config/default.json
{
  "db": "postgres://postgres:password@localhost:5432/name"
}
```

or
```
// config/default.json
{
  "db": {
    "user": "postgres",
    "password": "",
    "host": "localhost",
    "port": 5432,
    "name": "name"
  }
}
```

You can also specify your database URL by setting the environment variable DATABASE_URL.
```
DATABASE_URL=postgres://postgres@localhost/name node-pg-migrate
```

You can specify a custom JSON file with the config (the format is the same as for the db entry of the config file).
If a .env file exists, it will be loaded using dotenv (if installed) when running the pg-migrate binary.
Depending on your project's setup, it may make sense to write some custom grunt tasks that set this env var and run your migration commands. More on that below.
The following are the available commands:
- pg-migrate create {migration-name} - creates a new migration file with the name you give it. Spaces and underscores will be replaced by dashes and a timestamp is prepended to your file name.
- pg-migrate up - runs all up migrations from the current state.
- pg-migrate up {N} - runs N up migrations from the current state.
- pg-migrate down - runs a single down migration.
- pg-migrate down {N} - runs N down migrations from the current state.
- pg-migrate unlock - unlocks migrations (if a previous up/down migration failed and was not automatically unlocked).
Configuration
You can adjust defaults by passing arguments to pg-migrate:
- config-file (f) - The file with migration JSON config (defaults to undefined)
- schema (s) - The schema on which the migration will be run (defaults to public)
- database-url-var (d) - Name of the env variable with the database URL string (defaults to DATABASE_URL)
- migrations-dir (m) - The directory containing your migration files (defaults to migrations)
- migrations-schema - The schema storing the table of which migrations have been run (defaults to the same value as schema)
- migrations-table (t) - The table storing which migrations have been run (defaults to pgmigrations)
- check-order - Check the order of migrations before running them (defaults to true; to switch it off, supply --no-check-order on the command line). (There should be no migration with a timestamp lesser than the last run migration.)
See all by running pg-migrate --help.
Most configuration options can also be specified in the config file.
For an SSL connection to the DB you can set the PGSSLMODE environment variable to any value from the list other than disable,
e.g. PGSSLMODE=require pg-migrate up (pg will take it into account).
JSON Configuration
You can use config or your own json file with configuration (config-file command line option).
Available options are:
- migrations-dir, migrations-schema, migrations-table, check-order - same as above
- either url or user, password, host (defaults to localhost), port (defaults to 5432), name - for connection details
- type-shorthands - for column type shorthands

You can specify custom types which will be expanded to a column definition. E.g. for

```javascript
const PgLiteral = require('node-pg-migrate').PgLiteral;

module.exports = {
  "type-shorthands": {
    id: { type: 'uuid', primaryKey: true },
    createdAt: {
      type: 'timestamp',
      notNull: true,
      default: new PgLiteral('current_timestamp')
    }
  }
};
```

the call pgm.createTable('test', { id: 'id', createdAt: 'createdAt' }); will produce the SQL CREATE TABLE "test" ("id" uuid PRIMARY KEY, "createdAt" timestamp DEFAULT current_timestamp NOT NULL);.
Locking
pg-migrate automatically checks that no other migration is running. To do so, it locks the migration table and writes a comment there.
There are other ways to do this, but I chose this one (see #88).
In some circumstances it is possible that the lock will not be released (error message: Error: Unable to fetch migrations: Error: Another migration is already running).
In that case you need to run pg-migrate unlock to release the lock again.
Defining Migrations
When you run pg-migrate create, a new migration file is created that looks like this:
```javascript
exports.up = function up(pgm) {
};

exports.down = function down(pgm) {
};
```

pgm is a helper object that provides migration operations; run is the callback to call when you are done (see Async Migrations below).
IMPORTANT
Calling the migration functions on pgm doesn't actually migrate your database. These functions just add SQL commands to a stack that is run later.
Automatic Down Migrations
If exports.down is not present in a migration, pg-migrate will try to automatically infer the operations that make up the down migration by reversing the operations of the up migration. Only some operations have automatically inferrable equivalents (details below on each operation). Sometimes, migrations are destructive and cannot be rolled back. In this case, you can set exports.down = false to tell pg-migrate that the down migration is impossible.
Async Migrations
In some cases, you may want to perform some async operation during a migration, for example fetching some information from an external server, or inserting some data into the database. To make a migration block operate in async mode, just add another callback argument to the function signature. However, be aware that NONE of the pgm operations will be executed until run() is called. Here's an example:
```javascript
exports.up = function up(pgm, run) {
  doSomethingAsync(function () {
    run();
  });
};
```

Another way to perform an async operation is to return a Promise from the up or down function. Example:
```javascript
exports.up = function (pgm) {
  return new Promise(resolve => {
    // doSomethingAsync
    resolve();
  });
};
```

Migration methods
The pgm object that is passed to each up/down block has many different operations available. Each operation is simply a function that generates some sql and stores it in the current pgm context.
By default, each migration runs in a transaction. To disable transactions for a specific migration, call pgm.noTransaction(). This is required for some SQL operations that cannot be run within a transaction, and should be used carefully.
Creating & Altering Tables / Columns
pgm.createTable( tablename, columns, options )
Create a new table - postgres docs
Arguments:
- tablename string - name for the new table
- columns object - column names / options -- see column definitions section
- options object - table options (optional)
  - inherits string - table to inherit from
Reverse Operation: dropTable
pgm.dropTable( tablename )
Drop existing table - postgres docs
Arguments:
tablenamestring - name of the table to drop
pgm.renameTable( tablename, new_tablename )
Rename a table - postgres docs
Arguments:

- tablename string - current name of the table
- new_tablename string - new name for the table

Reverse Operation: same operation in opposite direction
pgm.addColumns( tablename, new_columns )
Add columns to an existing table - postgres docs
Arguments:
- tablename string - name of the table to alter
- new_columns object - column names / options -- see column definitions section
Aliases: addColumn
Reverse Operation: dropColumns
pgm.dropColumns( tablename, columns )
Drop columns from a table - postgres docs
Arguments:
- tablename string - name of the table to alter
- columns array of strings or object - columns to drop (if object, uses keys)
Aliases: dropColumn
pgm.renameColumn( tablename, old_column_name, new_column_name )
Rename a column - postgres docs
Arguments:
- tablename string - name of the table to alter
- old_column_name string - current column name
- new_column_name string - new column name
Reverse Operation: same operation in opposite direction
pgm.alterColumn( tablename, column_name, column_options )
Alter a column (default value, type, allow null) - postgres docs
Arguments:
- tablename string - name of the table to alter
- column_name string - column to alter
- column_options object - optional new column options
  - default string or null - new default value
  - type string - new data type
  - notNull boolean - sets NOT NULL if true
pgm.addConstraint( tablename, constraint_name, expression )
Add a named column constraint - postgres docs
Arguments:
- tablename string - name of the table to alter
- constraint_name string - name for the constraint
- expression string - constraint expression (raw sql)
Aliases: createConstraint
Reverse Operation: dropConstraint
pgm.dropConstraint( tablename, constraint_name )
Drop a named column constraint - postgres docs
Arguments:

- tablename string - name of the table to alter
- constraint_name string - name of the constraint to drop
pgm.createIndex( tablename, columns, options )
Create a new index - postgres docs
Arguments:
- tablename string - name of the table to alter
- columns string or array of strings - columns to add to the index
- options object - optional index options
Aliases: addIndex
Reverse Operation: dropIndex
pgm.dropIndex( tablename, columns, options )
Drop an index - postgres docs
Arguments:
- tablename string - name of the table to alter
- columns string or array of strings - column names, used only to infer an index name
- options object - optional options
  - name string - name of the index to drop
Miscellaneous Operations
pgm.sql( sql )
Run raw sql -- with some optional very basic mustache templating
Arguments:

- sql string - SQL query to run (with optional mustache-style placeholders)
pgm.func( sql )
Inserts raw string, which is not escaped
e.g. pgm.func('CURRENT_TIMESTAMP') to use in default option for column definition
Arguments:
- sql string - string that will not be escaped
pgm.createExtension( extension )
Install postgres extension(s) - postgres docs
Arguments:
- extension string or array of strings - name(s) of extensions to install
Aliases: addExtension
Reverse Operation: dropExtension
pgm.dropExtension( extension )
Un-install postgres extension(s) - postgres docs
Arguments:
- extension string or array of strings - name(s) of extensions to uninstall
pgm.createType( type_name, values )
Create a new data type - postgres docs
Arguments:
- type_name string - name of the new type
- values array of strings or object - if an array, the contents are the possible values for an enum type; if an object, the names and types for a composite type
Aliases: addType
Reverse Operation: dropType
pgm.dropType( type_name )
Drop a custom data type - postgres docs
Arguments:
- type_name string - name of the type to drop
Column Definitions
The createTable and addColumns methods both take a columns argument that specifies column names and options. It is an object (key/value) where each key is the name of the column, and the value is another object that defines the options for the column.
- type string - data type (use normal postgres types)
- unique boolean - set to true to add a unique constraint on this column
- primaryKey boolean - set to true to make this column the primary key
- notNull boolean - set to true to make this column not null
- check string - sql for a check constraint for this column
- references string - a table name that this column is a foreign key to
- onDelete string - adds ON DELETE constraint for a reference column
- onUpdate string - adds ON UPDATE constraint for a reference column
Data types & Convenience Shorthand
Data type strings will be passed through directly to postgres, so write types as you would if you were writing the queries by hand.
There are some aliases on types to make things more foolproof: (int, string, float, double, datetime, bool)
There is a shorthand to pass only the type instead of an options object:
pgm.addColumns('myTable', { age: 'integer' });
is equivalent to
pgm.addColumns('myTable', { age: { type: 'integer' } });
There is a shorthand for normal auto-increment IDs:
pgm.addColumns('myTable', { id: 'id' });
is equivalent to
pgm.addColumns('myTable', { id: { type: 'serial', primaryKey: true } });
Using schemas
Instead of passing string as name to pgm functions, you can pass an object with keys schema and name. E.g.
```javascript
pgm.createTable({ schema: 'my_schema', name: 'my_table_name' }, { id: 'serial' });
```
will generate
```sql
CREATE TABLE "my_schema"."my_table_name" (
  "id" serial
);
```

Explanation & Goals
Why only Postgres? - By writing this migration tool specifically for postgres instead of accommodating many databases, we can actually provide a full-featured tool that is much simpler to use and maintain. I was tired of using crippled database tools just in case one day we switched our database.
Async / Sync - Everything is async in node, and that's great, but a migration tool should really just be a fancy wrapper that generates SQL. Most other migration tools force you to bring in control flow libraries or wrap everything in callbacks as soon as you want to do more than a single operation in a migration. Plus by building up a stack of operations, we can automatically infer down migrations (sometimes) to save even more time.
Naming / Raw Sql - Many tools force you to use their constants to do things like specify data types. Again, this tool should be a fancy wrapper that generates SQL, so whenever possible, it should just pass through user values directly to the SQL. The hard part is remembering the syntax of the specific operation, not remembering how to type "timestamp"!