Re: pg_dump -Z6 (the default) can be pretty slow

If you can use the directory format, then you can use multiple jobs to really speed up compressed dump (and restore).
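A minimal sketch (database name, paths, and job count are placeholders; pick -j to match your available CPU cores):

    # Parallel compressed dump using the directory format (-Fd)
    pg_dump -Fd -j 8 -Z6 -f /backups/mydb.dir mydb

    # Directory-format dumps can also be restored in parallel
    pg_restore -j 8 -d mydb_new /backups/mydb.dir

Only the directory format supports -j for pg_dump; each worker dumps (and compresses) a different table, so databases with many tables parallelize well.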

Also, I'd suggest trying a run with lz4 compression: lz4 is particularly good at not slowing down when it encounters already-compressed data. It doesn't give very high compression ratios, but since you're already down at -Z3, it might be worth comparing.
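For example (hypothetical names again; note that non-gzip compression methods in pg_dump require PostgreSQL 16 or newer, built with lz4 support):

    # Directory-format dump compressed with lz4 instead of gzip
    pg_dump -Fd -j 8 --compress=lz4 -f /backups/mydb_lz4.dir mydb

On data that is mostly already-compressed bytea, the lz4 run should land much closer to your -Z0 time than your -Z6 time.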

(Personally, I stay away from zstd, as I've seen it create malformed backups because the encoder crashes with out-of-memory.)

--
Scott Ribe
scott_ribe@xxxxxxxxxxxxxxxx
https://www.linkedin.com/in/scottribe/



> On Oct 18, 2023, at 4:30 PM, Ron <ronljohnsonjr@xxxxxxxxx> wrote:
> 
> In preparation for moving from 9.6 to something supported, I ran a pg_dump/pg_restore test (since the migrated databases will be on new servers, and we purge off old partitions and add new partitions, pg_upgrade and logical replication are off the table).
> 
> (The servers are VMs on ESX hosts, and on the same subnet.)
> 
> Our databases are chock full of bytea fields holding compressed images. pg_dump -Fd -Z6 took 25 minutes and used 5.5GB of disk space (remember, it's a test!), while pg_dump -Fd -Z0 took only 90 seconds but consumed 15GB.
> 
> This isn't really surprising to anyone who's ever tried to gzip a jpg file...
> 
> Quite the speed increase if you can swallow the increased disk usage.
> 
> pg_dump -Z3 did the best: only 8.5 minutes, while using just 5.8GB disk space.
> 
> -- 
> Born in Arizona, moved to Babylonia.
> 
> 
