The technique Jeff describes below is exactly how we do it,
except we use file-system snapshots instead of rsync.
The problem is how slow WAL replay is during recovery: it's
a single process, and a slow one at that.
-kg
On Jan 26, 2009, at 11:58 AM, Jeff wrote:
On Jan 26, 2009, at 2:42 PM, David Rees wrote:
Lots of people have databases much, much bigger - I'd hate to
imagine having to restore one of those monsters from backup.
If you use PITR + rsync you can create a binary snapshot of the db,
so restore time is simply how long it takes to untar / copy it back
into place. Our backup script basically does the following (a rough
sketch follows the list):
archive backup directory
pg_start_backup
rsync
pg_stop_backup
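A minimal shell sketch of those steps - the paths, superuser name, and
backup label are assumptions, not Jeff's actual script:

    #!/bin/sh
    PGDATA=/var/lib/pgsql/data   # assumed cluster location
    BACKUP=/backups/pgsql        # assumed backup destination

    # 1. archive the previous backup directory out of the way
    tar -czf "$BACKUP/base-$(date +%Y%m%d).tar.gz" -C "$BACKUP" current

    # 2. mark the start of the base backup (forces a checkpoint)
    psql -U postgres -c "SELECT pg_start_backup('nightly');"

    # 3. copy the cluster; rsync only transfers changed files
    rsync -a --delete "$PGDATA/" "$BACKUP/current/"

    # 4. mark the end; Postgres archives the WAL needed for consistency
    psql -U postgres -c "SELECT pg_stop_backup();"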
voila - I have 2 full copies of the db. You could even expand it a
bit: after the rsync finishes, fire up a Postgres instance on the
copy and run pg_dump against it for a pg_restore-compatible dump
"just in case".
It takes a long time to restore a 300GB db, even if you cheat and
parallelize some of it. 8.4 may get a pg_restore that can load in
parallel - which will help somewhat.
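(For reference, that parallel load shipped as pg_restore's -j/--jobs
switch, which needs a custom-format dump like the one above; a usage
sketch with placeholder names:

    # Restore using 4 parallel jobs
    pg_restore -j 4 -d mydb /backups/pgsql/mydb.dump
)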
--
Jeff Trout <jeff@xxxxxxxxxxxxx>
http://www.stuarthamm.net/
http://www.dellsmartexitin.com/