On 01/14/2015 12:20 PM, Eduardo Morras wrote:
On Thu, 8 Jan 2015 11:05:54 +0530
girish R G peetle <giri.anamika0@xxxxxxxxx> wrote:
Hi all,
We have a customer with a 1 TB database on a production server. They are
trying a dump-based backup of this large database; the following dump
command is being used.
The dump rate is around 12 GB/hr, so the backup will take a long time to
complete, and this is affecting their production server.
Is there a way to increase the dump rate?
pg_dump -U <User> -Fc -b --port=<Port> '<Db-Name>'
Do not use pg_dump's built-in compression; pipe the output to xz instead:
% pg_dump -U <User> -Z0 -Fc -b --port=<Port> '<Db-Name>' | xz -3 > dump.xz
Here -Z0 turns off pg_dump's own compression so xz does all the work; xz
reads from stdin and the result is redirected into dump.xz. Or pipe xz's
output to another program.
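To restore from a dump produced this way, the compressed archive can be fed
back through pg_restore; a minimal sketch, reusing the placeholders above:
% xz -dc dump.xz | pg_restore -U <User> --port=<Port> -d '<Db-Name>'
Note that pg_restore can read a custom-format archive from a pipe, but
parallel restore (-j) needs a seekable file, so decompress to disk first if
you want that.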
When I looked into the same problem on the 8.3-8.4 versions, the bottleneck was in accessing TOAST tables: their contents were decompressed, dumped, and recompressed again. I don't know whether that has changed in current versions.
Don't do this; you are still looking at an extremely slow dump. Instead,
set up a warm or hot standby, or use pg_basebackup.
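For example, a base backup can be taken roughly like this (a sketch; the
target directory is a placeholder, and the user needs replication
privileges):
% pg_basebackup -U <User> --port=<Port> -D /path/to/backup -Ft -z -X stream -P
Here -Ft -z writes compressed tar files, -X stream streams the WAL needed
for a consistent restore while the backup runs, and -P shows progress.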
JD
--
Command Prompt, Inc. - http://www.commandprompt.com/ 503-667-4564
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, @cmdpromptinc
"If we send our children to Caesar for their education, we should
not be surprised when they come back as Romans."