Tom Lane <tgl@xxxxxxxxxxxxx> wrote:
> "Kevin Grittner" <Kevin.Grittner@xxxxxxxxxxxx> writes:
>> Since the dump to custom format ran longer than the full pg_dump
>> piped directly to psql would have taken, the overall time to use
>> this technique is clearly longer for our databases on our hardware.
>
> Hmmm ... AFAIR there isn't a good reason for dump to custom format
> to take longer than plain text dump, except for applying
> compression.  Maybe -Z0 would be worth testing?  Or is the problem
> that you have to write the data to a disk file rather than just
> piping it?

I did some checking with the DBA who normally copies these around for
development and test environments.  He confirmed that when the source
and target are on the same machine, a pg_dump piped to psql takes
about two hours.  If he pipes across the network, it runs more like
three hours.

My pg_dump to custom format ran for six hours.  The single-transaction
restore from that dump file took two hours, with both on the same
machine.  I can confirm with benchmarks, but this guy generally knows
what he's talking about (and we do create a lot of development and
test databases this way).

Either the compression is tripling the dump time, or there is
something inefficient about how pg_dump writes to the disk.

All of this is on a RAID 5 array with 5 drives using xfs with
noatime,nobarrier and a 256MB BBU controller.

-Kevin
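
For reference, the invocations under discussion would look roughly
like this (a sketch only; the database, host, and file names are made
up, and the exact options used in the runs above aren't spelled out
here):

    # Plain-text dump piped straight to psql (the ~2-hour path;
    # names are placeholders):
    pg_dump -h sourcehost sourcedb | psql targetdb

    # Custom-format dump to a disk file, then a single-transaction
    # restore (the ~6-hour dump plus ~2-hour restore path).  -Z0
    # disables compression, which is what Tom suggests testing:
    pg_dump -Fc -Z0 -f sourcedb.dump sourcedb
    pg_restore --single-transaction -d targetdb sourcedb.dump

Custom format is compressed by default when pg_dump is built with
zlib, so comparing runs with and without -Z0 should show whether
compression accounts for the extra time.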