Thanks Joshua. Even if we have a long transaction running on the database, pg_dump shouldn't be affected, right? As it doesn't block readers or writers.
Before getting resources to set up a standby server, I just want to make sure that we don't hit this issue on the standby too.

On Thu, Jan 15, 2015 at 11:11 PM, Joshua D. Drake <jd@xxxxxxxxxxxxxxxxx> wrote:
On 01/15/2015 09:21 AM, girish R G peetle wrote:
Hi all,
We tried pg_dump with the compression level set to zero on a 1 TB database.
The dump data rate started at 250 GB/hr and gradually dropped to 30 GB/hr
over a two-hour span. We might see this behavior on the standby server
too, which would be undesirable.
Any explanation for why we see this behavior?
Because you have a long-running transaction that is causing bloat to pile up. Using pg_dump on a production database of that size is a non-starter. You need a warm/hot standby or a snapshot to do this properly.
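One way to check whether a long-running transaction is the culprit is to query pg_stat_activity for sessions whose transaction has been open for a long time. A minimal sketch (the one-hour threshold is an arbitrary example, adjust it to your workload):

```sql
-- Sessions with a transaction open for more than one hour.
-- Long-open transactions pin old row versions, so VACUUM cannot
-- reclaim them and bloat accumulates while pg_dump runs.
SELECT pid,
       usename,
       state,
       xact_start,
       now() - xact_start AS xact_age,
       left(query, 60)    AS current_query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
  AND now() - xact_start > interval '1 hour'
ORDER BY xact_start;
```

Note that pg_dump itself appears here too, since it holds a repeatable-read transaction open for the entire dump.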
JD
--
Command Prompt, Inc. - http://www.commandprompt.com/ 503-667-4564
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, @cmdpromptinc
"If we send our children to Caesar for their education, we should
not be surprised when they come back as Romans."