In response to Glenn Maynard <glenn@xxxxxxxx>:

> On Thu, Feb 10, 2011 at 6:44 PM, Bill Moran <wmoran@xxxxxxxxxxxxxxxxx> wrote:
>
> > dbsteward can do downgrades ... you just feed it the old schema and
> > the new schema in reverse of how you'd do an upgrade ;)
> >
> > Oh, also, it allows us to do installation-specific overrides.  We use
> > this ONLY for DML for lookup lists where some clients have slightly
> > different names for things than others.  In theory, it could do DDL
> > overrides as well, but we decided on a policy of not utilizing that
> > because we wanted the schemas to be consistent on all our installs.
>
> What about upgrades that can't be derived directly from an inspection of
> the schema?  Some examples:
>
> - Adding a NOT NULL constraint (without adding a DEFAULT).  You often want
>   to precede this with filling in any existing NULL values, so the new
>   constraint doesn't fail.
> - Updating triggers, functions and their effects.  For example, when I have
>   an FTS index with a trigger to update an index column, and I change the
>   underlying trigger, I often do something like "UPDATE table SET column =
>   column" to cause all of the update triggers to fire and recalculate the
>   index columns.
> - Creating a new column based on an old one, and removing the old one; e.g.
>   add a column "n", run "UPDATE ... SET n = i*j * 2", and then drop the old
>   columns "i" and "j".
> - Updating data from an external source, such as ORM model code; for
>   example, if you have a table representing external files, an update may
>   want to calculate and update the SHA-1 of each file.
> - For efficiency, dropping a specific index while making a large update,
>   and then recreating the index.
>
> In my experience, while generating schema updates automatically is handy,
> it tends to make nontrivial database updates more complicated.  These
> sorts of things happen often and are an integral part of a database
> update, so I'm just curious how/if you deal with them.
>
> I've used Ruby's migrations, and for my Django databases I use my own
> migration system based in principle on it: create scripts that migrate
> the database from version X to X+1 and X-1, and upgrade or downgrade by
> running the appropriate scripts in sequence.
>
> It's not ideal, since it can't generate a database at a specific version
> directly; it always has to run through the entire sequence of migrations
> to the version you want, and the migrations accumulate.  However, it can
> handle whatever arbitrary steps are needed to update a database, and I
> don't need to test updates from every version to every other version.

You're correct (based on our experience over the past few years).

The big caveat is that 99.9% of the database changes don't fall into those
"nontrivial" categories, and dbsteward makes those 99.9% of the changes
easy to make, reliable to reproduce, and easy to track.

We've added some things to handle the other 0.1% as well, such as
<beforeUpdateSQL> and <afterUpdateSQL>, where you can put arbitrary SQL to
be run before or after the rest of the automatic work is done.  We probably
haven't seen every circumstance that needs special handling, but we've
already struggled through a bunch.

All of this is part of the reason we're pushing to get this stuff
open-sourced.  We feel like we've got something that's pretty far along,
and we feel that community involvement will help enhance it.
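To make that concrete (the table and column names below are made up for
illustration), the NOT NULL case from your list would go roughly like this:
backfill the existing NULLs from a <beforeUpdateSQL> block, and let the
normal old-schema/new-schema diff add the constraint itself.  Something
like:

    -- in <beforeUpdateSQL>: runs before the generated DDL, so the new
    -- constraint doesn't fail on existing rows
    UPDATE account SET status = 'unknown' WHERE status IS NULL;

    -- this part comes out of the automatic schema diff
    ALTER TABLE account ALTER COLUMN status SET NOT NULL;

The "fire all the triggers again" trick fits the same way, just in
<afterUpdateSQL> instead, e.g. UPDATE documents SET body = body;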
--
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/