On 10/18/23 17:37, Scott Ribe wrote:
If you can use the directory format, then you can use multiple jobs to really speed up compressed dump (and restore).
It didn't occur to me to mention that I used it. Do people really still not use -Fd?
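For anyone following along, the kind of invocation being discussed looks roughly like this (hypothetical database name and paths; adjust -j to the host's core count):

    # parallel, compressed, directory-format dump
    pg_dump -Fd -Z3 -j 4 -f /backups/mydb.dir mydb
    # parallel restore of that dump into an already-created target database
    pg_restore -j 4 -d mydb_new /backups/mydb.dir

Parallel dump (-j) only works with the directory format, which is part of why -Fd pays off here.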
Also, I'd suggest trying a run with lz4 compression--lz4 is particularly good at not slowing down when it encounters already-compressed data. Doesn't give really high compression ratios, but since you're already at -Z3, might be worth comparing.
I'm still using 9.6, so that feature isn't available yet. When I get the Pg 15 VMs stood up (Pg 15 binaries are not available for RHEL 6), I'm definitely going to try that.
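For reference, lz4 (and zstd) output compression only arrived in pg_dump with PostgreSQL 16, so it needs a v16-or-newer pg_dump client; a newer pg_dump can still dump an older server, though, so the cluster itself doesn't have to be upgraded first. A rough sketch, again with hypothetical names:

    # directory-format dump compressed with lz4 instead of gzip
    pg_dump -Fd -j 4 --compress=lz4 -f /backups/mydb.lz4.dir mydb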
(Personally, I stay away from zstd, as I've seen it create malformed backups because the encoder crashes with out-of-memory.)
--
Scott Ribe
scott_ribe@xxxxxxxxxxxxxxxx
https://www.linkedin.com/in/scottribe/
On Oct 18, 2023, at 4:30 PM, Ron <ronljohnsonjr@xxxxxxxxx> wrote:
In preparation for moving from 9.6 to something supported, I ran a pg_dump/pg_restore test (pg_upgrade and logical replication are off the table, since the migrated databases will be on new servers and we purge old partitions and add new ones).
(The servers are VMs on ESX hosts, and on the same subnet.)
Our databases are chock full of bytea fields holding compressed images. pg_dump -Fd -Z6 took 25 minutes and used 5.5GB of disk space (remember, it's a test!), while pg_dump -Fd -Z0 took only 90 seconds but consumed 15GB.
This isn't really surprising to anyone who's ever tried to gzip a jpg file...
Quite the speed increase if you can swallow the increased disk usage.
pg_dump -Z3 did the best: only 8.5 minutes, while using just 5.8GB of disk space.
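To reproduce that kind of comparison, the runs amount to something like this (hypothetical database name and output paths; du shows the resulting dump directory sizes):

    time pg_dump -Fd -Z0 -f /backups/test_z0 mydb
    time pg_dump -Fd -Z3 -f /backups/test_z3 mydb
    time pg_dump -Fd -Z6 -f /backups/test_z6 mydb
    du -sh /backups/test_z0 /backups/test_z3 /backups/test_z6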
--
Born in Arizona, moved to Babylonia.