> Could somebody explain to me, or point me to a resource where I can find
> out what the recommended practice is when a live DB needs to be replaced
> with a new version of it that has a slightly different structure?

Our development infrastructure includes a development database cluster, with one database per developer, and a staging database with a largish deployment of the current production database version. The developers are free to test whatever database modifications they need on their private development databases.

We have a "setup_db" script which creates the database structure plus the initial data. The script is based on an XML file which is processed by XSLT to generate the actual schema for Oracle/Postgres, so on any modification of the schema the developers usually just recreate their DB from scratch. We test on both Postgres and Oracle; the application must support both (it can even connect to both in parallel).

When a release is approaching, we diff the currently deployed production schema against the schema of the release to be deployed. The differences go into a DB migration script (a separate one for Oracle and for Postgres), broken into a few steps:

- init: create the new DB structures (create new tables, add new fields to existing tables, insert new data, etc.). After this step the DB can be used both by the old and by the new version of the application, so we then restart the application with the new version deployed (actually we usually have another cluster of app servers, running the new version, connecting to the same database).

- cleanup: delete the old structures and add the new constraints which could not be added earlier because of compatibility problems with the old version.

Minimal sketches of both steps are appended at the end of this message.

All of this can be done with ALTER TABLE and the like. When a field changes its structure/meaning, we actually create a new field and copy the old one over into it, with the needed conversions. The new version will then use the new field, the old version the old field... we do everything possible to be able to run both versions after the init step.

We actually have a middle step too: our systems are (logically) partitioned into sub-systems, and we usually migrate them separately, so there is also a "migrate sub-system" step. This way we can have two versions of the software connecting to the DB in parallel, each running a different set of sub-systems.

After the migration scripts are created, we test them on the staging system (our QA does this, and they are more than happy to discover any mistakes in the process ;-)

We also have a quite extensive unit/integration test suite which catches a lot of potential regressions. That makes any change easier to prove as not breaking existing functionality, and in particular it makes database changes easier to do with enough confidence that they will work.

HTH,
Csaba.
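
P.S. To make the steps concrete, here are the promised sketches (Postgres flavour; the Oracle scripts differ in syntax details). All table/column names and the normalize_phone conversion function are made up purely for illustration, they are not from our actual schema. First the init step, for an imaginary change where a free-text "phone" column on a "customer" table is replaced by a normalized "phone_e164" column:

  -- hypothetical conversion function; a real one would be more careful
  CREATE OR REPLACE FUNCTION normalize_phone(p text) RETURNS text AS $$
      SELECT '+' || regexp_replace(p, '[^0-9]', '', 'g');
  $$ LANGUAGE sql;

  -- init: only ADD things, never touch what the old version depends on
  ALTER TABLE customer ADD COLUMN phone_e164 varchar(20);

  -- new tables and new reference data go in here too
  CREATE TABLE country_dial_code (
      country_code char(2)    PRIMARY KEY,
      dial_prefix  varchar(6) NOT NULL
  );
  INSERT INTO country_dial_code (country_code, dial_prefix) VALUES ('DE', '+49');
  INSERT INTO country_dial_code (country_code, dial_prefix) VALUES ('US', '+1');

  -- copy the old field over with the needed conversion
  UPDATE customer SET phone_e164 = normalize_phone(phone);

  -- note: no NOT NULL on phone_e164 yet; that would break the old
  -- version, which keeps writing only the old "phone" column, so the
  -- constraint is deferred to the cleanup step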
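
One thing this glosses over: while both versions run, the old version keeps writing only the old field, so the new field would go stale. How to handle that depends on the concrete case; one possible approach (just a sketch of one way to do it, not a recipe we apply everywhere) is a trigger that converts the old field forward whenever a write comes from the old version:

  CREATE OR REPLACE FUNCTION customer_phone_sync() RETURNS trigger AS $$
  BEGIN
      IF TG_OP = 'INSERT' THEN
          -- a row inserted by the old version has no value for the new column
          IF NEW.phone_e164 IS NULL THEN
              NEW.phone_e164 := normalize_phone(NEW.phone);
          END IF;
      ELSIF NEW.phone IS DISTINCT FROM OLD.phone THEN
          -- the old version changed the old column: convert it forward
          NEW.phone_e164 := normalize_phone(NEW.phone);
      END IF;
      RETURN NEW;
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER customer_phone_sync_trg
      BEFORE INSERT OR UPDATE ON customer
      FOR EACH ROW EXECUTE PROCEDURE customer_phone_sync();

Writes coming from the new version are left alone: on its inserts phone_e164 is already set, and on its updates the old column does not change.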
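
And the matching cleanup step, run once no old-version app server connects any more:

  -- the compatibility aids (if the sync trigger above was used) go first
  DROP TRIGGER customer_phone_sync_trg ON customer;
  DROP FUNCTION customer_phone_sync();

  -- delete the old structures
  ALTER TABLE customer DROP COLUMN phone;

  -- add the constraints which would have broken the old version
  ALTER TABLE customer ALTER COLUMN phone_e164 SET NOT NULL;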