Hey guys, we just moved our system to Amazon's EC2 service. I'm a bit paranoid about backups, and this environment is very different from our previous one. I was hoping you guys could point out any major flaws in our backup strategy that I may have missed.

A few assumptions:

1. It's OK if we lose a few seconds (or even minutes) of transactions should one of our primary databases crash.
2. It's unlikely we'll need to load a backup that's more than a few days old.

Here's what we're currently doing:

* Primary database ships WAL files to S3.
* Snapshot the primary database to a tar file.
* Upload the tar file to S3.
* Create a secondary database from the tar file on S3.
* Put the secondary database into continuous recovery mode, pulling WAL files from S3.

Every night on the secondary database:

* shut down postgres
* unmount the EBS volume that contains the postgres data
* create a new snapshot of the EBS volume
* remount the EBS volume
* restart postgres

I manually delete older WAL files and snapshots once I've verified that a newer snapshot can be brought up as an active database and I've run a few tests on it.

Other than that, we have some miscellaneous monitoring to keep track of the number of log files in the pg_xlog directory and the amount of available disk space on all the servers. If the number of log files starts to grow beyond a certain threshold, that indicates something went wrong with the log shipping and we'll investigate to see what the problem is.

I think this is a pretty good strategy, but I've been so caught up in this that I may not be seeing the forest for the trees, so I thought I'd ask for a sanity check here.

Thanks,
Bryan
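
P.S. In case the details matter, here's roughly how the WAL shipping piece could be wired up. This is just a sketch, not our exact scripts: the script name, bucket, paths, and the use of boto3 are placeholders.

# wal_s3.py -- sketch of WAL archive/restore glue for S3 (boto3 assumed).
# On the primary (postgresql.conf):
#   archive_command = 'python /opt/scripts/wal_s3.py archive %p %f'
# On the secondary (recovery.conf):
#   restore_command = 'python /opt/scripts/wal_s3.py restore %f %p'
import sys
import boto3

BUCKET = "example-wal-archive"   # placeholder bucket name
PREFIX = "wal/"                  # placeholder key prefix

s3 = boto3.client("s3")          # credentials/region come from the environment

def archive(wal_path, wal_name):
    # Upload one WAL segment; a non-zero exit makes PostgreSQL retry later.
    s3.upload_file(wal_path, BUCKET, PREFIX + wal_name)

def restore(wal_name, dest_path):
    # Fetch one WAL segment for the standby's continuous recovery.
    s3.download_file(BUCKET, PREFIX + wal_name, dest_path)

if __name__ == "__main__":
    mode, arg1, arg2 = sys.argv[1:4]
    try:
        archive(arg1, arg2) if mode == "archive" else restore(arg1, arg2)
    except Exception as exc:
        sys.stderr.write("wal_s3 %s failed: %s\n" % (mode, exc))
        sys.exit(1)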
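
The nightly snapshot step on the secondary is driven by a small script along these lines (again a sketch; the volume id, device, mount point, and permissions handling are placeholders):

# nightly_snapshot.py -- sketch of the nightly EBS snapshot on the secondary.
# Assumes it runs with permission to stop postgres and (un)mount the volume.
import subprocess
import boto3

PGDATA = "/pgdata"            # mount point / data directory (placeholder)
DEVICE = "/dev/sdf"           # EBS device (placeholder)
VOLUME_ID = "vol-00000000"    # EBS volume id (placeholder)

ec2 = boto3.client("ec2")

def nightly_snapshot():
    # 1. Stop the standby so the filesystem is quiescent.
    subprocess.check_call(["pg_ctl", "stop", "-D", PGDATA, "-m", "fast", "-w"])
    try:
        # 2. Unmount the data volume before snapshotting.
        subprocess.check_call(["umount", PGDATA])
        # 3. Kick off the snapshot; EBS finishes it in the background.
        snap = ec2.create_snapshot(VolumeId=VOLUME_ID,
                                   Description="nightly standby snapshot")
        print("started snapshot %s" % snap["SnapshotId"])
        # 4. Remount once the snapshot call has returned.
        subprocess.check_call(["mount", DEVICE, PGDATA])
    finally:
        # 5. Restart postgres (it resumes continuous recovery) even on error.
        subprocess.check_call(["pg_ctl", "start", "-D", PGDATA])

if __name__ == "__main__":
    nightly_snapshot()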
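
And the monitoring is little more than a cron check like this (thresholds, paths, and the alert hook are placeholders):

# backup_checks.py -- sketch of the log-shipping / disk-space checks.
import os

PG_XLOG = "/pgdata/pg_xlog"    # placeholder path on the primary
MAX_WAL_FILES = 100            # alert if this many segments pile up
MIN_FREE_FRACTION = 0.10       # alert below 10% free space

def alert(message):
    # Placeholder: hook into whatever paging/email system is in use.
    print("ALERT: " + message)

def check_wal_backlog():
    # A growing pg_xlog usually means the archive command (S3 upload) is failing.
    segments = [f for f in os.listdir(PG_XLOG) if len(f) == 24]  # WAL names are 24 hex chars
    if len(segments) > MAX_WAL_FILES:
        alert("pg_xlog holds %d segments (threshold %d)" % (len(segments), MAX_WAL_FILES))

def check_disk(path="/pgdata"):
    # Warn before the volume fills up and takes the database down with it.
    st = os.statvfs(path)
    free = float(st.f_bavail) / st.f_blocks
    if free < MIN_FREE_FRACTION:
        alert("%s has only %.0f%% free" % (path, free * 100))

if __name__ == "__main__":
    check_wal_backlog()
    check_disk()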