Aug. 31, 2007, 11:54 a.m.

Imperative Programming is Rotting People's Brains

We have a problem with our database schema upgrade mechanism at work. The problem is that we don't test the migration script we generate against the databases for which it's targeted.

The solution is obvious: Change the way we generate this script.

At least, this is the argument I keep getting. Let me fill in some context and see if I can make sense of all this. First, let me list the goals of this mechanism:

  1. Generate an upgrade script for our application to go from version X to version Y
  2. Create incremental scripts for dev and test systems

This is a fairly common problem with a wide variety of solutions, many of which are far more complex than they need to be for the benefit they provide. We moved past most of these arguments by getting people to accept two things: a symmetric process enabling downgrades is impossible, and it really, really doesn't matter that our script creates a table and then alters it sixteen times, because the alternative is for a human to do a lot of work to save the computer from doing a little. So we check in little pieces of work that get concatenated into a big script.

The only thing it really came down to was how we check these scripts in. One proposal was to have the scripts be numbered and just increment the number every time you write one. You can then have a table with a bunch of numbers in it indicating which scripts have been applied, which meets goal #2. This sort of falls apart, though, at the point where you have two people writing db scripts at the same time.
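The numbered-script proposal is simple enough to sketch in a few lines. This is a minimal illustration, not our actual tooling; the function and table names are hypothetical:

```python
# Sketch of the sequential-numbering approach (names hypothetical).
# Each script gets the next integer; a tracking table records which
# numbers have already been applied to a given database.

def pending_scripts(all_scripts, applied):
    """Return the scripts not yet applied, in numeric order.

    all_scripts: {number: sql_text}
    applied: set of numbers already recorded in the tracking table
    """
    return [sql for num, sql in sorted(all_scripts.items())
            if num not in applied]

scripts = {
    51: "CREATE TABLE a (id INT)",
    52: "ALTER TABLE a ADD COLUMN x INT",
    53: "CREATE TABLE b (id INT)",
}
# A database that has already run 51 and 52 only needs 53.
todo = pending_scripts(scripts, applied={51, 52})
```

The failure mode described above is exactly that two people both pick 53 for their script number, and the dict (or the filesystem) can only hold one of them.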

My suggestion is ostensibly more complicated, but in practice has been much easier for me (and I've done twice as many commits as the next person in this area): script dependencies. I can declare this a good idea because I stole it from NetBSD (although I wrote the ordering utility myself and the semantics are slightly different). Basically, every script declares what it provides and what it requires, a total order is computed using a topological sort, and a single combined script is emitted. Goal #1 is met, and goal #2 may be met by having a table of what tokens have been provided (e.g. user+home_page or something) and excluding any scripts that provide any of these tokens.

Conceptually, you can consider these the same, except in my approach, you explicitly list your dependencies, and in the sequencing approach, you implicitly assume that everything you do requires everything anyone's done before. The difference shows up when you have more than one person working on something. Two people check in script 53. Someone loses. This is when the "well, let's just put it all in one big file" or "well, renumber one of them" arguments start (either of which makes goal #2 much more difficult).

While there are some complaints that one must know what dependencies are needed (which I find completely bogus, as that effort is negligible compared to the actual contents), the larger complaint seems to be that people don't get to specifically define the order in which changes occur. This is the turning point of this entire ramble.

I've been pondering this for a couple of days now, and it seems that I've taken imperative programmers out of their comfort zone. Why would anyone want to spend time on the least interesting part of the application? If there is a script A and a script B that create tables A and B respectively, and there is no relationship whatsoever between A and B, it simply doesn't matter in which order they're invoked. No amount of fear will convince me otherwise.

So, back to our actual problems. Your hand doesn't need to have all of its fingers to count the number of times we've had problems with this process due to unstated or incorrect dependencies. We have automated tools in place for generating the script for #1 (from buildbot against every change) and #2 (dynamically determining the state of a DB and building a custom script). Both of these processes at least confirm that your dependencies make sense. However, there is no testing, so we don't confirm that what we're doing is actually correct when applied to a real DB until someone manually does so.

In preparing our staging databases for an upgrade of our new app, we ran into a small number of problems:

  1. Unexpected data problems on the new database.
  2. Some scripts modified too much data and took too long.
  3. Some script removed data people wanted.

Nobody complained too much about #1, although that one really hurt the most. I didn't realize I even had any respect for mysql to lose until I started this job, worked with it regularly, and found that I was, in fact, losing some. For #2, people changed some stuff around to avoid making such large changes at once.

But for #3, the debate started up yet again. "OMG! The records are all gone! We need to use sequential scripts!!" OK, this came down to two scripts with two possible orders: 1) a column is added to the DB and another script removes every row where that column is null, or 2) that other script fails because the column doesn't exist yet.
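In the dependency model, this scenario isn't an ordering debate at all: the deletion script simply declares that it requires the new column. A minimal sketch (script names hypothetical, again using the standard-library sorter rather than our actual tool):

```python
# Sketch: with an explicit dependency, the only valid order is
# "add the column, then purge rows" (names hypothetical).

from graphlib import TopologicalSorter

ts = TopologicalSorter()
ts.add("add_status_column")                       # provides the column
ts.add("purge_null_status", "add_status_column")  # requires the column
order = list(ts.static_order())
# The sort can never emit purge_null_status before add_status_column,
# so the "column doesn't exist" failure mode is ruled out up front.
```

Whether the purge itself should have happened is a separate question, but the ordering failure is exactly the class of bug the dependency declaration eliminates.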

As far as I can tell, our only problem is that we don't test this stuff regularly. This really shouldn't surprise anyone, but I've been surprised to find how difficult it is to get people to focus on what's actually wrong. Inevitably, I started ending all of these conversations with the following words: if your proposal doesn't end with "...and you don't have to test it," then I don't want to hear it.
