On Thu, 8 Jan 2015 11:05:54 +0530
girish R G peetle <giri.anamika0@xxxxxxxxx> wrote:

> Hi all,
> We have a customer with a 1TB database on a production server. They
> are trying a dump-based backup of this large database; the following
> dump command is being used.
> The dump rate is around 12 GB/hr, so the backup will take a long time
> to complete, and this is affecting their production server.
> Is there a way to increase the dump rate?
>
> pg_dump -U <User> -Fc -b --port=<Port> '<Db-Name>'
>

Don't use pg_dump's built-in compression (-Z0 below turns it off);
pipe the output to xz instead:

% pg_dump -U <User> -Z0 -Fc -b --port=<Port> '<Db-Name>' | xz -3 > dump.xz

or pipe xz's output on to another program.

When I looked into the same problem on 8.3-8.4, the bottleneck was in
accessing TOAST tables: their content was decompressed, dumped and
recompressed again. I don't know whether that has changed in current
versions.

> PostgreSQL version : 9.2.4
> Platform : Linux
>
> Thanks
> Girish

---
Eduardo Morras <emorrasg@xxxxxxxx>


--
Sent via pgsql-admin mailing list (pgsql-admin@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin
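
A small companion sketch, not from the original thread (the filename dump.xz and the connection placeholders simply mirror the dump command above): because pg_restore can read a custom-format archive from standard input, the xz-compressed dump can be decompressed on the fly and restored without unpacking it on disk first.

% xz -dc dump.xz | pg_restore -U <User> --port=<Port> -d '<Db-Name>'   # hypothetical restore of the dump.xz produced above

Keeping the decompression in a pipe avoids needing roughly double the disk space during restore, but note that a piped restore cannot use pg_restore's parallel (-j) mode, which requires a seekable archive file.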