On Mar 3, 2006, at 11:54 AM, Simon Riggs wrote:

> On Thu, 2006-03-02 at 16:38 -0600, Thomas F. O'Connell wrote:
>> Ideally, I'd be able to take a base backup of a production system,
>> copy it to a remote system, which is also the repository for segment
>> files generated by archive_command, and complete the recovery process
>> outlined in the docs. From that point, it would make sense to me that
>> I should be able to continuously replay WAL files against the new
>> database (possibly as soon as archive_command generates a new one)
>> without having to purge my data directory. Is that a reasonable
>> assumption?
>
> Yes, it was designed to be able to do this.
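
For concreteness, the setup I'm describing looks roughly like this. It's
only a sketch; the hostname, archive path, and restore_wal.sh script name
are placeholders of mine rather than anything out of the docs:

    # postgresql.conf on the primary -- in 8.1, a non-empty archive_command
    # is what turns WAL archiving on (%p is the segment's path, %f its name)
    archive_command = 'rsync -a %p standby:/var/lib/pgsql/archive/%f'

    # recovery.conf in the standby's data directory -- restore_command points
    # at a waiting wrapper (sketched further down) instead of a plain cp, so
    # replay keeps going as new segments arrive
    restore_command = '/usr/local/bin/restore_wal.sh /var/lib/pgsql/archive/%f "%p"'

rsync is convenient here because it copies to a temporary name and then
renames, so the standby never picks up a half-written segment.
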
>> From the docs, I'm having a hard time determining which steps to
>> edit or omit in order to execute this scenario. Is it possible for
>> you (or anyone else on the list) to present an extension of section
>> 23.3.3 <http://www.postgresql.org/docs/8.1/static/backup-online.html#BACKUP-PITR-RECOVERY>
>> that covers the continuous replay scenario? I'd be happy to help
>> contribute a patch to the docs once I understand the procedure a bit
>> better.
>>
>> For instance, it's not immediately clear from the docs what happens
>> to the segment files after restore_command runs during the recovery
>> scenario. It says those segments are copied from the archive
>> directory, but then what? Are they recycled as in a basic postgres
>> installation?
>
> They overwrite each other, thus avoiding a build-up of logs. It is
> designed to support "infinite" recovery.
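
The piece that makes the replay effectively "infinite" on my end is the
restore_command wrapper: rather than a plain cp, I'm planning a script
that blocks until the requested segment arrives, so recovery never
finishes on its own. A rough sketch (the script name, paths, and the
5-second poll interval are my own placeholders):

    #!/bin/sh
    # restore_wal.sh -- hypothetical restore_command wrapper
    # usage: restore_wal.sh /path/in/archive/<file> <destination %p>
    SRC="$1"
    DST="$2"

    # Timeline history and backup label files may legitimately never show
    # up in the archive, so fail fast on those instead of waiting forever.
    case "$SRC" in
        *.history|*.backup)
            [ -f "$SRC" ] || exit 1
            ;;
    esac

    # For ordinary segments, wait until the archiver has delivered the
    # file, then copy it to wherever the server asked for it.
    while [ ! -f "$SRC" ]; do
        sleep 5
    done
    exec cp "$SRC" "$DST"

To bring the standby fully online later, I'd presumably add a
trigger-file check inside the loop so the script can be told to stop
waiting and let recovery finish.
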
>> Is there any management of this process that I'd need to account for
>> in related scripts, restore_command or otherwise?
>
> As long as you've read the documented caveats, there are no design
> limitations... but I would restart from a base backup weekly, to be
> sure. It's all about certainty, after all.

Sounds good. Thanks for the tips!
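
For my own reference, the weekly refresh from a fresh base backup would
look something like the following. Everything in it is a placeholder
(paths, the backup label, keeping a spare recovery.conf under
/etc/postgresql), and it glosses over details such as excluding pg_xlog
from the tar, so it's a sketch of the shape rather than a procedure:

    # On the primary: take an online base backup while it keeps running
    psql -c "SELECT pg_start_backup('weekly_refresh');"
    tar -czf /tmp/weekly_base.tar.gz -C /var/lib/pgsql data
    psql -c "SELECT pg_stop_backup();"
    scp /tmp/weekly_base.tar.gz standby:/tmp/

    # On the standby: swap the old cluster for the new base backup, put
    # recovery.conf back in place, and start replaying again
    pg_ctl -D /var/lib/pgsql/data stop -m immediate
    rm -rf /var/lib/pgsql/data
    tar -xzf /tmp/weekly_base.tar.gz -C /var/lib/pgsql
    cp /etc/postgresql/recovery.conf /var/lib/pgsql/data/
    pg_ctl -D /var/lib/pgsql/data start
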
--
Thomas F. O'Connell
Database Architecture and Programming
Co-Founder
Sitening, LLC
http://www.sitening.com/
3004 B Poston Avenue
Nashville, TN 37203-1314
615-260-0005 (cell)
615-469-5150 (office)
615-469-5151 (fax)