Hi Simon,

Many thanks for this suggestion - this sounds ideal actually.

My thoughts are that I would write a shell script that gets called for each
log file requested via recovery.conf:

a) If the log file requested exists, copy it and exit with 0 status.
b) If the file doesn't exist, check whether a "bring_online" flag file has
   been set in the log transfer area ...
   ... if so, exit with non-zero status so the server can be brought online.
   ... if not, sleep for 5 or 10 minutes - within this time the next log
   may or may not have arrived.
c) After the 5 or 10 minutes, repeat from step a (the requested file may
   have arrived in the meantime).

So basically, if I need the server brought up, I can either manually touch
bring_online in the log transfer folder and wait for the next 5/10-minute
check, or have my monitoring system automatically touch that file when the
connection to the primary database is broken and switch the database server
DNS to the standby server.

Sounds good!?!

Cheers for the idea, Simon - now to get coding...! (I've put a first rough
sketch of the script at the bottom of this mail.)

Andy

Simon Riggs wrote:
> On Wed, 2006-02-22 at 16:26 +0000, Andy Shellam wrote:
>> Is this scenario possible - that you can keep rolling forward over log
>> files as long as necessary, or do you always have to start from a base
>> backup? Nothing is changing on the spare, it's literally a sitting duck.
>
> You'll need a restore_command that is a script that sits in a wait loop
> when the file it is asked for is not available yet, or other conditions
> have occurred such as notification of switchover (manually or otherwise).
> When those conditions occur the script should return a non-zero error
> condition. There should be no recovery_target* settings.
>
> Best Regards,
> Simon Riggs
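
P.S. Here's the rough first sketch I mentioned - completely untested, and
the script name and archive path are just placeholders for whatever I end
up using; the bring_online flag and the 5-minute sleep are as described
above. It would be wired up in recovery.conf along the lines of:

    restore_command = '/usr/local/bin/standby_restore.sh %f %p'

where PostgreSQL substitutes %f with the name of the WAL file it wants and
%p with the path the file should be copied to.

    #!/bin/sh
    # standby_restore.sh <wal-file-name> <destination-path>
    # Sketch of the wait-loop restore_command described above.

    WAL_ARCHIVE=/var/lib/pgsql/wal_archive     # log transfer area (placeholder path)
    TRIGGER_FILE=$WAL_ARCHIVE/bring_online     # flag file that ends recovery
    SLEEP_SECS=300                             # 5 minutes between checks

    while true
    do
        # a) if the requested log file has arrived, hand it to the server
        if [ -f "$WAL_ARCHIVE/$1" ]; then
            cp "$WAL_ARCHIVE/$1" "$2"
            exit 0
        fi

        # b) if the bring_online flag has been set, return non-zero so
        #    recovery ends and the server comes up
        if [ -f "$TRIGGER_FILE" ]; then
            exit 1
        fi

        # c) otherwise wait and check again
        sleep "$SLEEP_SECS"
    done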