Jay Levitt spoke:

>> Greg Sabino Mullane wrote:
>> update pg_database set datallowconn = false where datname = 'foobar';

> That's perfect - thanks. Now I can (I think) do this:
>
> pg_restore -d rails_dev_new
> [wait]
> psql template1
> update pg_database set datallowconn = false where datname = 'rails_dev';
> select pg_terminate_backend(procpid) from pg_stat_activity
>   where datname = 'rails_dev';
> begin;
> alter database rails_dev rename to rails_dev_old;
> alter database rails_dev_new rename to rails_dev;
> commit;
> drop database rails_dev_old;
> \q

Yes, but if that's truly the process, you might as well save some steps
and just drop the existing one and do a single rename:

select pg_terminate_backend(procpid) from pg_stat_activity
  where datname = 'rails_dev';
drop database rails_dev;
alter database rails_dev_new rename to rails_dev;
\q

That said, keeping the old copy around as your version does is the
safer route: a developer will find they have something on there they
need about two minutes after you drop it. :) Space permitting, of
course.

> Bucardo looks great for replication, but it mentions that it won't do
> DDL. I think that means if someone added a new column to production
> yesterday, but I haven't run that migration yet locally, Bucardo will
> choke when it tries to sync.. ya? (Though the easy workaround is run
> the darn migration first.)

Yes - the canonical way is to get the schema in sync first, then let
Bucardo handle the data.
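A quick way to double-check the two really are in sync before kicking
off a run - the hostnames below are just placeholders:

pg_dump --schema-only -h prodhost rails_dev > prod_schema.sql
pg_dump --schema-only -h devhost  rails_dev > dev_schema.sql
diff prod_schema.sql dev_schema.sql   ## no output means they match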
> By "snapshots", do you mean filesystem-level snapshots like XFS or
> LVM? OS X has no support for either, sadly.

Yes, that's exactly what I mean. We have clients using that with great
success. Simply make a snapshot of the production database volumes,
mount it on the dev box, and go.
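A bare-bones LVM version, with invented volume and directory names
(if WAL lives on a separate volume, the snapshots must cover both):

## On prod, assuming the data dir lives on /dev/vg0/pgdata:
lvcreate --snapshot --size 10G --name pgsnap /dev/vg0/pgdata
mount /dev/vg0/pgsnap /mnt/pgsnap
## Point a postmaster at the copy; Postgres treats it like a server
## that crashed and simply runs its normal crash recovery on startup:
pg_ctl -D /mnt/pgsnap/data start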
> That sounds like I couldn't use production log-shipping to sync test
> databases. Unless that doc is not *quite* true, and I could somehow:
> ...

Well, you can use Postgres' PITR (point in time recovery) aka warm
standby to create standby slaves identical to the master, and then at
some point in time flip them to become live, independent databases that
can be modified. The downside is that you then have to create a new
base backup, which means rsyncing the entire data directory to a new
slave/standby box. However, this might be worth it, as you can
front-load the time spent doing so - once it is rsynced and the standby
is up and running (and receiving data from the master), switching it
from standby to normal mode (and thus creating a perfect clone of
production at that moment) is pretty much instant.
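The base backup itself is the usual start/copy/stop dance - a sketch,
with made-up paths:

psql -c "select pg_start_backup('devclone')"
## pg_xlog is skipped because the WAL arrives via archiving anyway:
rsync -a --exclude=pg_xlog /var/lib/pgsql/data/ devbox:/var/lib/pgsql/data/
psql -c "select pg_stop_backup()"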
So the process would be:

* Turn on archive_command on prod, point it to the dev box (see the
  config sketch after this list)
* Create a base backup, ship the data dir to the dev box, start up
  the db
* In the AM, tell the dev box to finish recovery and come up as a
  live database. Tell the prod box to stop trying to ship logs to it
* Rearrange some dirs on the dev box, and start over again by making
  a new base backup, rsyncing the data dir, etc.
* In the AM, stop the old dev database. Bring the new one up (recover
  it)
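In terms of actual knobs, the pieces look roughly like this - the
directory names and trigger file are made up, and this assumes the
contrib pg_standby program for the warm standby loop:

## postgresql.conf on prod - ship each finished WAL file to dev:
archive_mode = on
archive_command = 'rsync -a %p devbox:/var/lib/pgsql/walarchive/%f'

## recovery.conf on the dev box - pg_standby keeps replaying WAL
## until the trigger file appears:
restore_command = 'pg_standby -t /tmp/pg.trigger /var/lib/pgsql/walarchive %f %p'

## In the AM, flip the standby into a live, writable database:
touch /tmp/pg.trigger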
You could even make less frequent base backups if you keep enough logs
around to play forward through more than a day's worth of logs.

> Those are good ideas; I bet the pg_restore can be much faster with
> giant checkpoints, lots of memory, etc. I also see Bucardo's
> split_pg_dump, which would probably help - no point creating indexes
> on-the-fly.

Well, if they are needed on prod, you probably want them on dev :)
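As for restore speed, the knobs are exactly the ones you named.
Something like this - sample values only, and only on a box you can
afford to lose:

## postgresql.conf while the restore runs:
maintenance_work_mem = 1GB     ## faster index builds
checkpoint_segments = 64       ## fewer, larger checkpoints
fsync = off                    ## fine for a throwaway dev restore

## then run the restore with parallel jobs (8.4 or later):
pg_restore -j 4 -d rails_dev_new rails_dev.dump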
--
Greg Sabino Mullane greg@xxxxxxxxxxxx
End Point Corporation http://www.endpoint.com/