On 11/10/2014 03:34 AM, Condor wrote:
Hello,

I found a strange result when I used pg_dump as described on the PostgreSQL site: http://www.postgresql.org/docs/9.3/static/backup-dump.html

I have a database with 30 GB of data and decided to archive it. PostgreSQL is 9.3.5 x86_64, ext4 file system, kernel 3.14.18, Slackware 14.2 (current).
How did you determine there is 30GB of data?
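For reference, one way to measure it from inside PostgreSQL (a minimal sketch; note that the on-disk size includes indexes and any bloat, so it is normally larger than a plain-text dump):

    $ psql -d logdb -c "SELECT pg_size_pretty(pg_database_size('logdb'));"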
First I used gzip:

    pg_dump logdb | gzip > log.sql.gz

After a few minutes I had log.sql.gz with a size of 2 170 016 226 bytes. That seemed strange, so I dumped the database again with:

    pg_dump logdb | split -b 1024m - log.sql

20 files were generated, and I zipped them with:

    zip -r log.sql.zip logdir

(because I moved them into logdir). The file size is 2 170 020 867 bytes. Almost the same, but if I check the sizes inside the archives there is a huge difference.
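A direct way to compare the real uncompressed sizes, without trusting the archive headers (a sketch, assuming split's default two-letter suffixes like log.sqlaa, log.sqlab, ...):

    $ gzip -dc log.sql.gz | wc -c       # decompress and count the actual bytes
    $ cat logdir/log.sql?? | wc -c      # total bytes across the 20 split pieces

Since the two dumps were taken at different times, the counts may differ slightly even if both are intact.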
Any reason for not using pg_dump -Fc and getting the built-in compression?
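Something like this (a sketch; the file names are illustrative):

    $ pg_dump -Fc logdb > log.dump           # custom format, zlib-compressed by default
    $ pg_restore -l log.dump > /dev/null     # list the table of contents as a quick sanity check
    $ pg_restore -d logdb_restore log.dump   # restore into another database later

The custom format also lets pg_restore do selective restores, and -Z 0..9 adjusts the compression level.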
$ gzip -l log.sql.gz
     compressed   uncompressed  ratio  uncompressed_name
     2170016226     3060688725  29.1%  log_to.sql

and

$ unzip -v log.sql.zip
*** snip ***
    --------       -------  ---   -------
 20240557909    2170020867  89%   20 files

Here is the difference: with gzip I have a 29.1% compression ratio and an uncompressed size of 3 060 688 725, which means 3 GB, while with zip I have an 89% compression ratio and an uncompressed size of 20 240 557 909, which means 20 GB. That is almost 7 times bigger.

My question is: is there some special config parameter that is not described in the documentation here: http://www.postgresql.org/docs/9.3/static/backup-dump.html ? Or does something need to be configured on my Linux? And the most important question for me: is the database dump corrupt or not?

Regards,
Hristo Simeonov
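One thing worth checking before assuming corruption: gzip's file trailer stores the uncompressed length in a 32-bit field, so gzip -l reports that length modulo 2^32 for anything over 4 GiB, and the ratio it prints is computed from the same truncated value. The numbers above are consistent with exactly that (a sketch; the integrity test reads the whole stream and verifies the CRC):

    $ echo $(( 3060688725 + 4 * 4294967296 ))   # reported size + 4 * 2^32
    20240557909
    $ gzip -t log.sql.gz && echo stream OK      # exit status 0 means the data is intact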
--
Adrian Klaver
adrian.klaver@xxxxxxxxxxx