On Fri, Aug 04, 2006 at 03:46:09PM -0400, Matthew T. O'Connor wrote:
> Chander Ganesan wrote:
> >Matthew T. O'Connor wrote:
> >>I have done some googling for real-world archive_command examples and
> >>haven't really found anything. The example in the PGSQL docs is
> >>qualified by "(This is an example, not a recommendation, and may not
> >>work on all platforms.)"
> >>
> >>I have it set as follows:
> >>archive_command = 'rsync -a %p backup_server:/pgsql_pitr/%f'
> >
> >It doesn't look to be a *bad* choice. I'd definitely recommend
> >keeping a copy off of the current system - which you do here. You
> >might also consider keeping a local copy (so you don't have to copy
> >them back if you have to do a local recovery).
>
> I know this can be done, but what I'm looking for is whether someone
> has written scripts that I can crib from that offer additional
> features, such as protection against overwriting an existing file,
> notification of the admin in case of failure, etc.

Take a look at http://pgfoundry.org/projects/pgpitrha/

Also, note that in 8.1 you have to manually archive the last WAL file
after pg_stop_backup(), or your backup is useless until that WAL file
fills up on its own and is archived. There are a bunch of new functions
in 8.2 that will make a lot of this stuff easier, btw.

> >>Also, I'm concerned that this client's website has extended periods
> >>of very low traffic, which will result in the same WAL file being
> >>used for long periods of time and not getting archived. Does anyone
> >>have a tested script available for grabbing the most recent WAL
> >>file? I can write one myself, but it seems like information that
> >>should be posted somewhere.
> >
> >The checkpoint_timeout value should help with this - its default is
> >300 seconds, so you should checkpoint at least once every 5 minutes.
>
> I don't see how checkpoint_timeout is relevant. Just because we
> checkpoint doesn't mean the WAL file will get archived.
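[Editorial note: a minimal sketch of the kind of overwrite-safe,
admin-notifying archive script Matthew asks about. This is illustrative
only, not taken from pgpitrha; ARCHIVE_DIR, ADMIN_EMAIL, and the use of
mail(1) are assumptions of the sketch.]

```shell
# Placeholders; set these for your own environment.
ARCHIVE_DIR=${ARCHIVE_DIR:-/var/lib/pgsql/wal_archive}
ADMIN_EMAIL=${ADMIN_EMAIL:-dba@example.com}

# archive_wal <path-to-segment> <segment-file-name>
# Returns non-zero on any problem, so PostgreSQL keeps the segment and
# retries later instead of considering it archived.
archive_wal() {
    src=$1
    name=$2
    # Refuse to overwrite an already-archived segment.
    if [ -e "$ARCHIVE_DIR/$name" ]; then
        echo "WAL segment $name already archived; refusing to overwrite" >&2
        return 1
    fi
    if ! cp "$src" "$ARCHIVE_DIR/$name"; then
        # Notify the admin on failure; mail(1) being available is an
        # assumption of this sketch.
        echo "archiving of $name failed" \
            | mail -s "WAL archive failure" "$ADMIN_EMAIL" 2>/dev/null
        return 1
    fi
    return 0
}
```

Saved as, say, /usr/local/bin/archive_wal.sh with a driver line such as
`archive_wal "$1" "$2"` at the bottom, this could then be wired up as
archive_command = '/usr/local/bin/archive_wal.sh %p %f'.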
> I have to have more than 16MB of WAL traffic before a file gets
> archived, regardless of checkpointing - or am I missing something?

You're not.

> >You could set up a 'hot standby' system that uses a tool like cron to
> >periodically sync your pg_xlog directory to your backup server (or
> >just sync it so you have it) - which might be useful if you go for
> >long periods of time between checkpoints. A common scenario is to
> >place one server into a "constant recovery" mode by using a
> >restore_command that waits for new logs to be available before
> >copying them. Periodically sync your pg_xlog directory in this case
> >to ensure that when you need to recover you'll have most of what you
> >need... but perhaps not all.
>
> You say the "hot standby" is a common scenario, yet I'm not sure it's
> even possible, since the docs only mention it in passing and I wasn't
> able to find any example script that implements a restore_command
> that does this. Am I missing something that is obvious?

See above pgfoundry link. :)
--
Jim C. Nasby, Sr. Engineering Consultant      jnasby@xxxxxxxxxxxxx
Pervasive Software      http://pervasive.com
work: 512-231-6117    vcard: http://jim.nasby.net/pervasive.vcf
cell: 512-569-9461
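[Editorial note: a minimal sketch of the "waiting" restore_command that
Chander describes for a constant-recovery standby. WAL_ARCHIVE and the
5-second poll interval are assumptions of this sketch, not anything from
the thread or from pgpitrha.]

```shell
# Placeholder; the shared directory the primary archives into.
WAL_ARCHIVE=${WAL_ARCHIVE:-/var/lib/pgsql/wal_archive}

# restore_wal <segment-file-name> <destination-path>
# Blocks until the requested segment appears in the archive, then copies
# it into place for the recovering server.
restore_wal() {
    name=$1
    dest=$2
    while [ ! -f "$WAL_ARCHIVE/$name" ]; do
        sleep 5
    done
    cp "$WAL_ARCHIVE/$name" "$dest"
}
```

Wrapped in a script, this would be used as
restore_command = 'restore_wal.sh %f %p'. A real version also needs a
way out of the loop (for example, a trigger file the admin touches to
end recovery), or the standby can never be brought fully up.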