Writing Database Migrations
As part of our work on RITA, we will need to make schema changes (such as creating tables and adding columns) to live production databases during software upgrades without losing data. Here I will show how migrations can be used to implement these changes. Although aimed at Migrate4J users, some of this applies to Rails Migrations as well.
We use Migrate4J to implement database migrations in this Java application. This requires us to write Java code to migrate up to, and down from, each specific database version, by making the required database changes: adding tables and fields, changing field names and types, and modifying data.
However, in our team the database designer is not the person writing these migrations. The designer is working on his copy of the database design, keeping in mind backwards compatibility with the LCTT Access database, and giving me Postgres schema dumps. I have to compare these dumps to identify what has changed, and write the migration code.
What Changed?
First of all, how does one compare dumps? I found Subversion and diff to be very helpful. We keep the currently-implemented schema checked into Subversion as a Postgres dump. When I receive a new one, I replace this file, but don't immediately check it in. I can then use the svn diff command, or the Subclipse plugin's Compare With feature, to see all the changes since the last revision.
Unfortunately Postgres dumps contain some lines that change every time and which aren't helpful to me, so after I update the dump, I run a command to remove them:
sed -i.orig -e '/^-- TOC entry/d' -e '/^-- Dependencies:/d' master-schema-from-aaron.sql
And then show the differences:
svn diff --diff-cmd=diff -x "-u -F TABLE" master-schema-from-aaron.sql > master-schema-from-aaron.diff
which produces a file that I can load into a syntax highlighting editor (I often pipe it into less instead), and which looks like this:
@@ -554,7 +596,7 @@ -- Name: bundle_type_group; Type: TABLE;
 CREATE TABLE bundle_type_group (
     id integer NOT NULL,
     description character varying(255) NOT NULL,
-    is_qty_allowed smallint,
+    is_qty_allowed smallint NOT NULL,
     record_version bigint NOT NULL,
     is_deleted smallint NOT NULL
 );

This is an extract from a unified diff. The first line, starting with @@, is a header that begins a new section: a block of changed lines, also called a changed hunk or chunk. It includes line numbers from the old and new dump files. It shows three lines of unchanged context above and below the lines that changed.
In this case the line CREATE TABLE bundle_type_group identifies the table being modified, but sometimes the context may not be enough. That is why the diff command above was given the -F TABLE option: the last line containing the word TABLE is repeated in the hunk header, which normally helps to identify the table as well.
So this section represents a change to the bundle_type_group table. What changed? A line has been deleted from the dump, and a line has been added. The deleted line is prefixed with - (minus) in the difference file, and the added line is prefixed with + (plus). These lines represent columns in the table.
In this case, the column removed and the column added are both called is_qty_allowed. Because the name is the same but the definition differs (here the column has become NOT NULL), this almost certainly represents a change to an existing column. If the names were different but the types were the same, it would probably represent a renamed column, and if the names and types both differed, it would probably be the deletion of one column and the creation of another, discarding the old contents of the column.
It's worth discussing any unclear changes with the database administrator to be sure exactly what needs to be done. Sometimes there will be data-only migration changes that don't appear in the schema at all. For example you might decide one day that all people currently called John in the database should now be called Jean, or you might need to add a row to a system table. These can also be done with Migrate4J, but they are not structural (schema) changes.
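For example, a data-only migration could issue plain SQL; a sketch, where the persons table and first_name column are hypothetical, and the Execute.executeStatement() and Configure.getConnection() calls are described under Executing Queries below:

// Everyone currently called John is now called Jean.
Execute.executeStatement(Configure.getConnection(),
    "UPDATE persons SET first_name = 'Jean' WHERE first_name = 'John'");

Such a statement would go into a migration's up() method (see the next section), with a matching statement to reverse it in down(); note that the reverse is lossy if anyone was already called Jean beforehand.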
Creating a New Migration
Assuming that you already have migrations configured in your application, you will have a migration package, where all the classes are named Migration_number. In our case, the migration package is org.wfp.rita.db.migrations. Identify the next migration number in this package, which is usually one higher than the highest number present. Create a class in the package with this name, using this template:
package org.wfp.rita.db.migrations;

import com.eroi.migrate.Migration;

/* static imports, for cleaner source: */
import static com.eroi.migrate.Execute.*;
import static com.eroi.migrate.Define.*;

public class Migration_2 implements Migration
{
    public void up()
    {
    }

    public void down()
    {
    }
}

Now you can write code to implement the database changes (both schema and data) that you discovered earlier. Each new change is part of an upward migration, and the code that implements it should go into the up() method.
It's important to be able to reverse changes as well. If a schema update fails, you may want to back down to a previous schema, fix the problem that caused it to fail, and try to update again. The code to reverse the change, which is called a downward migration, goes into the down() method.
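For example, a filled-in migration that creates a table on the way up and drops it again on the way down might look like this (a sketch only: the persons table is hypothetical, the createTable() call is explained under Creating Tables below, and Migrate4J's Table and Column classes plus java.sql.Types must be imported):

public class Migration_2 implements Migration
{
    public void up()
    {
        // Forward change: create the new table.
        createTable(new Table("persons", new Column[] {
            new Column("id", Types.INTEGER, -1, true, false, null, true) }));
    }

    public void down()
    {
        // Reverse change: drop it again, losing any data it contained.
        dropTable("persons");
    }
}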
Note that most migrations lose data in either the forward or the reverse direction (up or down respectively), so you would be well advised to make an automated backup of the database before applying any migrations, in addition to your standard database backup procedures.
Creating Tables
The Execute.createTable() method takes a Table object, which you construct from the table name and an array of Columns. You can create a new Column with one of these constructors:
- new Column(String columnName, int columnType)
- new Column(String columnName, int columnType, int length, boolean primaryKey, boolean nullable, Object defaultValue, boolean autoincrement)
- columnType: the type of the column, from java.sql.Types, e.g. Types.INTEGER, Types.FLOAT, Types.VARCHAR.
- length: the length of CHAR and VARCHAR columns. The length of all other column types, particularly DECIMAL, must be specified in another way; see below.
- primaryKey: true if this column should be part of the primary key, false otherwise (the default). You can have any number of columns in the primary key, and RITA uses composite primary keys extensively.
- nullable: true if this column should be allowed to contain NULL values, false otherwise.
- defaultValue: the default value for new rows. If you set this to null, and the column is not nullable, then a value must be supplied for each record inserted.
- autoincrement: true if the column should contain automatically-assigned numbers, using the AUTO_INCREMENT attribute in MySQL, or IDENTITY columns or sequences on databases that support them.
For example, to create a table called persons with three columns:

- id: an automatically-assigned integer primary key
- fish: a float
- rope: a string, 40 characters long, not nullable, defaulting to nylon

you could put the following code into the up migration:
Execute.createTable(new Table("persons", new Column[] {
    new Column("id", Types.INTEGER, -1, true, false, null, true),
    new Column("fish", Types.FLOAT),
    new Column("rope", Types.VARCHAR, 40, false, false, "nylon", false)
}));

Unfortunately this syntax doesn't allow specifying unique keys, indexes, foreign keys, or the precision and scale of decimal columns when the table is created. There is another, shorter syntax which does allow specifying the precision and scale:
createTable(table("persons",
    column("id", INTEGER, notnull(), primarykey()),
    column("fish", NUMERIC, precision(8), scale(5)),
    column("rope", VARCHAR, length(40), notnull(), defaultValue("nylon"))));

If that still seems like too much work, and you have a database dump of your new schema, have a look at generating from Postgres dumps below.
The reverse, which you would normally put into the down() method, is simply to drop the table.
Dropping Tables
Dropping a table is as simple as:

Execute.dropTable("persons");

Note that all data in the table will be lost. To recreate the empty table structure in the reverse migration, just create it again.
Adding Columns
To add an INTEGER column called hairs to the persons table, you would add the following code to the up() method:
Execute.addColumn(new Column("hairs", Types.INTEGER), "persons");

The addColumn() method takes a Column object, which you can create using either of the methods new Column(...) or column(...) described under Creating Tables above. The column(...) method is shorter, and is the only way to specify the scale and precision of decimal (NUMERIC) columns.
If the change is adding a column, the reverse is to remove the column again, which belongs in the down() method:
Execute.dropColumn("hairs", "persons");

Note that your newly added column will contain default values for all records. If you know what the values should be, or can recreate them using a query, you could execute SQL queries to populate it. Also note that if you migrate down past this version, the column will be dropped and all data contained in it will be lost.
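For instance, the up() method could backfill the new column straight after adding it; a sketch, where treating zero as the correct starting value is purely an assumption for illustration:

public void up()
{
    addColumn(new Column("hairs", Types.INTEGER), "persons");
    // Backfill existing records (zero is an assumed placeholder value).
    Execute.executeStatement(Configure.getConnection(),
        "UPDATE persons SET hairs = 0");
}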
Removing Columns
This is the exact opposite of Adding Columns above. Put the dropColumn() call in the up migration, and the addColumn() call in the down migration.
Note that migrating down past this migration will not restore the data that was in your column before. If you know what it was, or can recreate it using a query, you could reinsert it using SQL queries.
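For example, paired up() and down() methods for removing the hypothetical hairs column from above might look like this (a sketch; whether the values can be restored depends on your data):

public void up()
{
    dropColumn("hairs", "persons");
}

public void down()
{
    // Restores the structure only; if the old values can be recreated
    // by a query, repopulate them here with Execute.executeStatement().
    addColumn(new Column("hairs", Types.INTEGER), "persons");
}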
Renaming Columns
Changing the name of a column does not lose any data. For example, we can rename the column called fish to hats in the persons table, and hope that people don't try to wear their pet haddock:
Execute.renameColumn("fish", "hats", "persons");

The down() migration trivially renames the column from the new name back to the old name.
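That is, the down() method simply reverses the order of the two name arguments:

public void down()
{
    // Back from hats to fish.
    renameColumn("hats", "fish", "persons");
}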
Indexes
You can add indexes to columns, both to improve search performance and to enforce the uniqueness of values in certain columns. The addIndex() method takes an Index object, which you can either create by calling its constructor, or more concisely by calling index() or uniqueIndex(). Both take the same parameters:
index(String indexName, String tableName, String... columnNames)

indexName is the name of the index, which can be null to generate a name automatically. However, such indexes cannot reliably be removed, so I recommend always naming your indexes explicitly. tableName is the name of the table that the index will be applied to, and columnNames is a list of names of columns that will be included in the index.
For example, to uniquely index the fish and rope columns in the persons table:
Execute.addIndex(uniqueIndex("uk_fish_rope", "persons", "fish", "rope"));

You can drop an index, for example for a downward migration, using the index name and the table name:
Execute.dropIndex("uk_fish_rope", "persons");
Foreign Keys
Foreign keys link one table to another, to enforce referential integrity between tables. You can create them with Execute.addForeignKey(), which takes a ForeignKey object. There are four ways to construct a ForeignKey:
- ForeignKey(String name, String parentTable, String parentColumn, String childTable, String childColumn)
- ForeignKey(String name, String parentTable, String parentColumn, String childTable, String childColumn, CascadeRule deleteRule, CascadeRule updateRule)
- ForeignKey(String name, String parentTable, String[] parentColumns, String childTable, String[] childColumns)
- ForeignKey(String name, String parentTable, String[] parentColumns, String childTable, String[] childColumns, CascadeRule cascadeDeleteRule, CascadeRule cascadeUpdateRule)
For example, to force a person's fish_id column to point to the id of a record in the fish table, you could use this:
Execute.addForeignKey(new ForeignKey("fk_persons_fish", "persons", "fish_id", "fish", "id"));

You can drop a foreign key, for example for a downward migration, using the key name and the child (referenced) table name:
Execute.dropForeignKey("fk_persons_fish", "fish");
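Since RITA uses composite primary keys extensively, the array constructors above are often needed too; a sketch, where the sites table and all the column names are hypothetical:

Execute.addForeignKey(new ForeignKey("fk_persons_site",
    "persons", new String[] { "site_id", "rita_id" },
    "sites", new String[] { "id", "rita_id" }));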
Executing Queries
You can execute any arbitrary SQL statement, for example to insert rows into a newly created table or to populate a newly created column:

Execute.executeStatement(Configure.getConnection(),
    "INSERT INTO users (name, password) VALUES ('fred', 'flintstone')");
Execute.executeStatement(Configure.getConnection(),
    "UPDATE users SET age = 42 WHERE name = 'barney'");

Although data manipulation language (DML) is much more standard across databases than data definition language (DDL), it's important to be careful to use only ANSI SQL in such statements if cross-database compatibility is important for your application (or might become important in future).
Generating Automatically
If you already have a table structure in a database somewhere, for example because you are retrofitting migrations to an existing project, or because you prefer using GUI tools to design databases, you may want to generate the migration code automatically, which also reduces the risk of errors.

I wrote a script to create Migrate4J migrations automatically from Postgres database dumps. It's not perfect, it probably only handles the SQL that we actually use, and it's not well tested, but it may help you. Just run it with the name of the exported schema dump file as its parameter, and it will generate Java code on the standard output, which you can copy and paste into a Java source file.
If the schema will continue to change, and you want help with creating new table definitions in future, you can save the generated output to a file under version control. When you need to generate migration code for a new schema, just overwrite that file and use svn diff as before to show the differences. They will now be expressed in Java code, which is easier to copy and paste into a new migration.
Applying Manually
In Eclipse, with a migrate4j.properties file on your classpath, you should be able to open the Migrate4J JAR file, expand the com.eroi.migrate package, right-click on Engine and choose "Run As/Java Application".
Applying Programmatically
As we are using Hibernate, we get a database connection using its Work class, and use it to invoke the migration engine:

// set up the migration schema and run all migrations
m_Session.doWork(new Work() {
    public void execute(Connection connection) throws SQLException {
        Configure.configure(connection, "org.wfp.rita.db.migrations");
        Engine.migrate();
    }
});
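If you are not using Hibernate, any plain JDBC connection should work just as well; a minimal sketch, where the connection URL and credentials are assumptions to be replaced with your own:

import java.sql.Connection;
import java.sql.DriverManager;

import com.eroi.migrate.Configure;
import com.eroi.migrate.Engine;

public class MigrationRunner
{
    public static void main(String[] args) throws Exception
    {
        // Hypothetical Postgres connection details; substitute your own.
        Connection connection = DriverManager.getConnection(
            "jdbc:postgresql://localhost/rita", "rita", "secret");
        try {
            // Point the engine at the migration package and run all migrations.
            Configure.configure(connection, "org.wfp.rita.db.migrations");
            Engine.migrate();
        } finally {
            connection.close();
        }
    }
}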
Version Control
If I don't check in the master schema changes immediately, when does that happen? I try to wait until I have all the schema changes implemented in Hibernate annotations and migrations, and have run as many tests as I feel the need to run, before checking everything in.
This ensures that the documentation checked in is consistent with the code at that point in time, lets me see the changes to the SQL dump, the Hibernate mappings and the migrations for a single schema update side by side, and reduces the risk of checking in broken code.