Re: Dump large DB and restore it after all.

On Tue, 05 Jul 2011 18:08:21 +0800, Craig Ringer wrote:
On 5/07/2011 5:00 PM, Condor wrote:
Hello people,
can I ask how to dump a large DB?

The same way as a smaller database: using pg_dump. Why are you trying to
split your dumps into 1GB files? What does that gain you?

Are you using some kind of old file system and operating system that
cannot handle files bigger than 2GB? If so, I'd be pretty worried
about running a database server on it.

Well, I ran pg_dump on an ext3 filesystem with PostgreSQL 8.x and 9, and
the SQL file was truncated.
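
For reference, the usual workaround for such file-size limits is to pipe
pg_dump through split so that no single piece exceeds the limit, then
reassemble the pieces with cat when restoring. A minimal sketch, assuming
GNU coreutils and a placeholder database name "mydb":

  pg_dump mydb | split -b 1G - mydb.sql.part-

  # restore: concatenate the pieces in order and feed them to psql
  cat mydb.sql.part-* | psql mydb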


As for gzip: gzip is almost perfectly safe. The only downside with
gzip is that a corrupted block in the file (due to a hard
disk/DVD/memory/tape error or whatever) makes the rest of the file,
after the corrupted block, unreadable. Since you shouldn't be storing
your backups on anything that might get corrupted blocks, that should
not be a problem. If you are worried about that, you're better off
still using gzip together with an error-correction system like par2 to
allow recovery from bad blocks. The gzipped dump plus the par2 file
will be smaller than the uncompressed dump, and give you much better
protection against errors than an uncompressed dump will.

To learn more about par2, go here:

  http://parchive.sourceforge.net/
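
A minimal sketch of that workflow, assuming the par2 command-line tool is
installed and using a placeholder dump name "mydb.sql.gz" (-r10 requests
roughly 10% recovery data):

  # compress the dump, then create par2 recovery files alongside it
  pg_dump mydb | gzip > mydb.sql.gz
  par2 create -r10 mydb.sql.gz.par2 mydb.sql.gz

  # later: check integrity, and repair from the recovery files if needed
  par2 verify mydb.sql.gz.par2
  par2 repair mydb.sql.gz.par2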


Thank you for the info.

--
Craig Ringer


--
Regards,
Condor


