On Tue, Aug 07, 2007 at 02:12:29PM -0500, Kevin Grittner wrote:
> > 2) Have archive_command copy to someplace on the database server, and
> > have another process copy from there to both the local backup as well as
> > the central backup.
>
> A possible option; although if the rsync daemon on the file server proves
> reliable, I don't see the benefit over having the archive command make a copy
> on the database server which flows to the file server and from the file server
> back to the central site. I'd rather add the load to the file server than the
> database server.

Yeah, if you can make that work reliably, then I agree it's probably better.

> > Copying a 16MB file that's already in memory isn't exactly an intensive
> > operation...
>
> That's true for the WAL files. The base backups are another story. We will
> normally have a database vacuum analyze between the base backup and the users
> being in there to care about performance, but that's not always the case --
> sometimes jury trials go late into the night and could overlap with a
> base backup. And some judges put in a lot of late hours; although they don't
> tend to bang on the database very heavily, they hate to be made to wait.

Ahh... well, that's something where rsync could actually help you, since it
allows you to put a bandwidth cap on the transfer.

Another option: some OSes (FreeBSD, for one) will respect process priority
when scheduling I/O as well, so if you nice the backup process it hopefully
wouldn't impact the database as much.

--
Decibel!, aka Jim Nasby                        decibel@xxxxxxxxxxx
EnterpriseDB      http://enterprisedb.com      512.569.9461 (cell)
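[For concreteness, a minimal sketch of what an archive_command wrapper combining
those two ideas might look like. This is not a tested setup from the thread:
the script name, the fileserver host/module, and the bandwidth and nice values
are all placeholders.]

```shell
#!/bin/sh
# Hypothetical archive_command wrapper -- host name, rsync module, and
# the numeric limits below are placeholders, not a recommended config.
#
# In postgresql.conf:
#   archive_command = '/usr/local/bin/archive-wal.sh "%p" "%f"'

WAL_PATH="$1"    # full path to the WAL segment (PostgreSQL's %p)
WAL_FILE="$2"    # file name only (PostgreSQL's %f)

# Run at reduced CPU priority and cap rsync at ~5 MB/s (--bwlimit is in
# KB/s), so shipping a 16MB segment doesn't compete with database I/O.
# The host::module syntax talks to an rsync daemon on the file server,
# as discussed above.
nice -n 10 rsync --bwlimit=5120 "$WAL_PATH" \
    fileserver::wal_archive/"$WAL_FILE"

# Propagate rsync's status: archive_command must exit nonzero on failure
# so PostgreSQL retries the segment instead of recycling it.
exit $?
```

The same --bwlimit/nice combination would apply to whatever script takes the
base backup, which is where the load concern above really bites.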