
Re: PITR Questions

Chander Ganesan wrote:
Matthew T. O'Connor wrote:
I have done some googling for real-world archive_command examples and haven't really found anything. The example in the PostgreSQL docs is qualified by "(This is an example, not a recommendation, and may not work on all platforms.)"

I have it set as follows:
archive_command = 'rsync -a %p backup_server:/pgsql_pitr/%f'
It doesn't look like a *bad* choice. I'd definitely recommend keeping a copy off the current system - which you do here. You might also consider keeping a local copy as well, so you don't have to copy the WAL files back if you ever need to do a local recovery.
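For example (the local directory here is just a placeholder), something along these lines keeps a local copy and pushes one off-host in a single archive_command:

archive_command = 'cp %p /var/lib/pgsql/wal_archive/%f && rsync -a %p backup_server:/pgsql_pitr/%f'

The && matters: if either copy fails the command exits non-zero, so PostgreSQL keeps the segment and retries later.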

I know it can, but what I'm looking for is whether someone has written scripts I can crib from that offer some additional features, such as protection against overwriting an existing file, notification of the admin in case of failure, etc.
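To be concrete, I'm picturing roughly the following untested sketch (the paths and admin address are made up), invoked as archive_command = '/usr/local/bin/archive_wal.sh %p %f':

#!/bin/sh
# archive_wal.sh <wal_path> <wal_file>
# Refuse to overwrite an existing archived copy, and mail the admin on any failure.
WAL_PATH="$1"        # full path of the segment (%p)
WAL_FILE="$2"        # file name only (%f)
ADMIN="dba@example.com"

# If the file already exists on the backup server, exit non-zero so
# PostgreSQL retries later instead of silently clobbering the old copy.
if ssh backup_server test -f "/pgsql_pitr/$WAL_FILE"; then
    echo "$WAL_FILE already exists on backup_server" | mail -s "WAL archive failure" "$ADMIN"
    exit 1
fi

if ! rsync -a "$WAL_PATH" "backup_server:/pgsql_pitr/$WAL_FILE"; then
    echo "rsync of $WAL_FILE failed" | mail -s "WAL archive failure" "$ADMIN"
    exit 1
fi

exit 0

Is that roughly what other people are doing, or is there a cleaner approach?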

Also, I'm concerned that this client's website has extended periods of very low traffic, which means the same WAL segment stays in use for a long time without getting archived. Does anyone have a tested script for grabbing the most recent WAL file? I can write one myself, but this seems like the kind of thing that should already be posted somewhere.
The checkpoint_timeout value should help with this - its default is 300 seconds, so you should checkpoint at least once every 5 minutes.

I don't see how checkpoint_timeout is relevant. Just because we checkpoint doesn't mean the WAL file will get archived. I have to generate 16MB of WAL traffic before a segment gets archived regardless of checkpointing, or am I missing something?

You could set up a 'hot standby' system that uses a tool like cron to periodically sync your pg_xlog directory to your backup server (or just sync it so you have a copy), which is useful if you go for long periods between checkpoints. A common scenario is to place one server into "constant recovery" mode by using a restore_command that waits for new logs to become available before copying them. Periodically syncing your pg_xlog directory in that case ensures that when you need to recover you'll have most of what you need...but perhaps not all.
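As a rough sketch (adjust the data directory to match your install), a cron entry like this would push the in-progress WAL segments every five minutes:

*/5 * * * *  rsync -a /var/lib/pgsql/data/pg_xlog/ backup_server:/pgsql_pitr/pg_xlog_copy/

Keep that copy separate from the archived segments, since the files in pg_xlog are still being written to and eventually get recycled.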

You say the "hot standby" is a common scenario, yet I'm not sure it's even possible, since the docs only mention it in passing and I wasn't able to find any example script that implements a restore_command that does this. Am I missing something obvious?
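For what it's worth, the sort of restore_command I imagined would just block until the requested segment appears. An untested sketch (the archive path is hypothetical), used as restore_command = '/usr/local/bin/wait_restore.sh %f %p':

#!/bin/sh
# wait_restore.sh <wal_file> <dest_path>
# Sleep until the requested segment shows up in the archive, then hand it to PostgreSQL.
WAL_FILE="$1"      # file name requested (%f)
DEST_PATH="$2"     # where PostgreSQL wants it copied (%p)
ARCHIVE_DIR="/pgsql_pitr"

while [ ! -f "$ARCHIVE_DIR/$WAL_FILE" ]; do
    sleep 5
done

cp "$ARCHIVE_DIR/$WAL_FILE" "$DEST_PATH"

Presumably a real script would also need some kind of trigger-file check so you can tell it to stop waiting and let recovery finish, but is that the general idea?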

Thanks,

Matt



