On 5/07/2011 5:00 PM, Condor wrote:
> Hello ppl, can I ask how to dump large DB ?
Same as a smaller database: use pg_dump. Why are you trying to split your dumps into 1GB files? What does that gain you?
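If you really do want 1GB pieces, it's easier to pipe the dump through split than to fight with the dump itself. A rough sketch (the database name "mydb" and the file names are just placeholders):

    # plain SQL dump, compressed and chopped into 1GB pieces
    pg_dump mydb | gzip | split -b 1G - mydb.sql.gz.part-

    # restore by stitching the pieces back together
    cat mydb.sql.gz.part-* | gunzip | psql mydb

The custom format (pg_dump -Fc) is already compressed and lets you restore selectively with pg_restore, so it's usually a better starting point than a plain SQL dump anyway.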
Are you using some kind of old file system and operating system that cannot handle files bigger than 2GB? If so, I'd be pretty worried about running a database server on it.
As for gzip: gzip is almost perfectly safe. The only downside of gzip is that a corrupted block in the file (due to a hard disk/dvd/memory/tape error or whatever) makes the rest of the file, after the corrupted block, unreadable. Since you shouldn't be storing your backups on anything that might get corrupted blocks, that should not be a problem.

If you are worried about that, you're still better off using gzip plus an error-correcting system like par2 to allow recovery from bad blocks. The gzipped dump plus the par2 file will be smaller than the uncompressed dump, and give you much better protection against errors than an uncompressed dump will.
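In case it helps, this is roughly what that looks like with the par2cmdline tool (the 10% redundancy level and the file names are just examples):

    # compress the dump, then create ~10% redundancy data for it
    gzip -9 mydb.sql
    par2 create -r10 mydb.sql.gz.par2 mydb.sql.gz

    # later: check the archive, and repair it if blocks have gone bad
    par2 verify mydb.sql.gz.par2
    par2 repair mydb.sql.gz.par2

Keep the .par2 files alongside the dump on your backup media; they're only useful if they travel with it.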
To learn more about par2, go here: http://parchive.sourceforge.net/

--
Craig Ringer
POST Newspapers
276 Onslow Rd, Shenton Park
Ph: 08 9381 3088   Fax: 08 9388 2258
ABN: 50 008 917 717
http://www.postnewspapers.com.au/