On Sun, Nov 13, 2011 at 8:42 PM, Robins Tharakan <robins.tharakan@xxxxxxxxxx> wrote:
> Hi,
>
> Well, the 'complex' stuff is only there for larger or high-traffic DBs.
> Besides, at 60GB that is a largish DB in itself, and you should begin to try
> out a few other backup methods nonetheless. All the more so if you are
> taking entire DB backups every day, since you would save considerably on
> (backup) storage.

Thanks. I usually keep only the last 6 days of backups, plus monthly backups as of Day 1, so it's not piling up or anything.

What "other methods" do you recommend? That was in fact my question. Do I need to install some modules?

> Anyway, as for pg_dump, we have a DB 20x bigger than you mention (1.3TB), and
> it takes only half a day to do a pg_dump + gzip (both). One thing that comes to
> mind: how are you compressing? I hope you are doing this in one operation
> (or at least piping pg_dump to gzip before writing to disk)?

I'm gzipping with these commands (this is my backup.sh):

BKPFILE=/backup/pg/dbback-${DATA}.sql
pg_dump MYDB -U MYDB_MYDB -f ${BKPFILE}
gzip --fast ${BKPFILE}

Is this good enough? Sadly, it takes up over 97% of the CPU when it's running!
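
For reference, a minimal sketch of the single-pass variant suggested above, piping pg_dump straight into gzip so the uncompressed dump never hits disk. It assumes DATA is a date string set earlier in backup.sh (e.g. DATA=$(date +%Y%m%d)) and reuses the same database name and role as the script quoted above:

# Pipe pg_dump directly into gzip; only the compressed file is written to disk.
BKPFILE=/backup/pg/dbback-${DATA}.sql.gz
pg_dump -U MYDB_MYDB MYDB | gzip --fast > ${BKPFILE}

Another option along the same lines is pg_dump's custom format (pg_dump -Fc), which compresses as it writes and lets individual tables be restored later with pg_restore.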