On 21 Jan 2008, at 16:18, Magnus Hagander wrote:
On Mon, Jan 21, 2008 at 04:05:13PM +0100, Henrik wrote:
Hello list,
I know backup of pg is a well-discussed topic, with solutions ranging
from simple pg_dump to more advanced PITR and Slony.
Even though I've studied most of them, I can't really decide on the
best solution for a new situation and would be grateful for any input
on this.
The situation is as follows.
We want to do daily backups from many installations to a remote
location, and we also want easy restores when disaster strikes.
Preferably, the backup site would only need an FTP server to store the
files on.
My optimal solution would be differential pg_dumps, but that is not
possible as far as I know. Doing pg_dumps every day is a little too
heavy, even though the DBs are not huge. I like the fact that I have
one big SQL file, which is really simple to restore with.
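For reference, restoring from that one big SQL file is a single psql
run; a minimal sketch, assuming the dump is in backup.sql and the
target database has to be created first (all names here are just
placeholders):

    createdb mydb
    psql -d mydb -f backup.sql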
Heavy where? If it's just heavy on the transfer, it might be possible
to do the dump locally and then rsync the file off to the server.
(Dumping without compression is likely to make rsync a lot more
efficient here, but I don't have any numbers to back up that guess.)
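Roughly something like this, with made-up host and path names:

    # Plain-format dump, uncompressed, so rsync's delta transfer can
    # match the unchanged parts against yesterday's copy.
    pg_dump -Fp mydb > /var/backups/mydb.sql

    # Send only the changed blocks; -z compresses on the wire instead
    # of in the file.
    rsync -avz /var/backups/mydb.sql backup.example.com:/backups/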
True. I tried again on a test installation with a 26G database and
noticed that pg_dump took only a fraction of the time it took before
(about 4 minutes). Maybe I had some weird parameters earlier; that is
why I didn't want to do pg_dump, but now it is a feasible solution.
Doing the rsync should also keep the network impact minimal. Is there
something I should think about when running pg_dump to make it faster?
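For what it's worth, I'm just comparing formats with the shell's time
builtin (mydb stands in for the real database):

    # Plain format, no compression: cheapest for pg_dump itself.
    time pg_dump -Fp mydb > mydb.sql

    # Custom format compresses by default (when built with zlib),
    # which adds CPU time to the dump.
    time pg_dump -Fc mydb > mydb.dump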
Thanks!