
Re: pg_dump slower than pg_restore

On 7/3/2014 10:36 AM, Bosco Rama wrote:
> If those large objects are 'files' that are already compressed (e.g.
> most image files and PDFs), you are spending a lot of time trying to
> compress the compressed data ... and failing.
>
> Try setting the compression level to an intermediate value, or even
> zero (i.e. no dump compression).  For example, to get the 'low-hanging
> fruit' compressed:
>
>     $ pg_dump -Z1 -Fc ...
>
> IIRC, the default value of '-Z' is 6.
>
> As usual, your choice will be a run-time vs. file-size trade-off, so try
> several values for '-Z' and see what works best for you.

That's interesting. Since I gzip the resulting output, I'll give -Z0 a try. I didn't realize that any compression was on by default.

Thanks for the tip...
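The run-time vs. file-size trade-off discussed above can be sketched without a live database: gzip exposes the same zlib levels 1-9 that pg_dump's '-Z' used at the time, and feeding it incompressible input shows why high levels waste CPU on already-compressed blobs. The file names and the 1 MiB size here are arbitrary choices for the sketch.

```shell
# Simulate already-compressed large objects (images, PDFs) with
# incompressible random data.
head -c 1048576 /dev/urandom > blob.bin

# Compare a fast zlib level against the slow maximum, mirroring
# pg_dump's -Z1 vs. -Z9.
gzip -1 -c blob.bin > blob.z1.gz
gzip -9 -c blob.bin > blob.z9.gz

# Random data does not compress: both outputs stay around 1 MiB,
# so the extra CPU time spent at level 9 buys essentially nothing.
ls -l blob.bin blob.z1.gz blob.z9.gz
```

In practice, something like `pg_dump -Z0 -Fc mydb | gzip > mydb.dump.gz` (database name hypothetical) skips the redundant internal compression pass when the output is compressed externally anyway.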


