Re: pg_dump far too slow

On Mar 21, 2010, at 8:50 AM, David Newall wrote:

> Tom Lane wrote:
>> I would bet that the reason for the slow throughput is that gzip
>> is fruitlessly searching for compressible sequences.  It won't find many.
>> 
> 
> 
> Indeed, I didn't expect much reduction in size, but I also didn't expect 
> a four-order of magnitude increase in run-time (i.e. output at 
> 10MB/second going down to 500KB/second), particularly as my estimate was 
> based on gzipping a previously gzipped file.  I think it's probably 
> pathological data, as it were.  Might even be of interest to gzip's 
> maintainers.
> 

gzip -9 is known to be very inefficient.  It is hardly ever more compact than -7, and often 2x slower or worse.
It's almost never worth using unless you don't care how long compression takes.

Try -Z1 (pg_dump's compression-level option).

At compression level 1 the output will often be compressed well enough, at much faster speeds: it is about 6x as fast as gzip -9 and typically produces files only about 10% larger.
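As a sketch of the tradeoff (the database name `mydb` is a placeholder, and exact speeds and sizes will depend on your data; the gzip demo below uses random bytes to mimic hard-to-compress input like the pre-compressed data discussed above):

```shell
# With pg_dump, -Z sets the compression level of the custom-format output
# (requires a running database; 'mydb' is a placeholder):
#   pg_dump -Fc -Z1 mydb -f mydb.dump   # fast, slightly larger
#   pg_dump -Fc -Z9 mydb -f mydb.dump   # much slower, marginally smaller

# The same tradeoff is visible with gzip alone on incompressible data:
head -c 1000000 /dev/urandom > /tmp/rand.bin
time gzip -1 -c /tmp/rand.bin > /tmp/rand1.gz
time gzip -9 -c /tmp/rand.bin > /tmp/rand9.gz
ls -l /tmp/rand1.gz /tmp/rand9.gz   # sizes will be nearly identical
```

On data that is already compressed, -9 spends far longer searching for matches and gains almost nothing, which is consistent with the slowdown described above.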

For some compression/decompression speed benchmarks see:

http://tukaani.org/lzma/benchmarks.html





-- 
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

