On 2/25/2011 7:26 AM, Vick Khera wrote:
On Thu, Feb 24, 2011 at 6:38 PM, Aleksey Tsalolikhin
<atsaloli.tech@xxxxxxxxx> wrote:
In practice, if I pg_dump our 100 GB database, our application, which
is half Web front end and half OLTP, at a certain point slows to a
crawl and the Web interface becomes unresponsive. I start getting
check_postgres complaints about the number of locks and query lengths.
I see locks held for over five minutes.
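For reference, one quick way to see which backends hold those locks and
for how long is to join pg_locks against pg_stat_activity. A minimal
sketch, assuming a 9.0-era server (which still calls the columns
procpid and current_query; newer releases rename them to pid and
query), with "yourdb" as a placeholder database name:

    # Show granted locks, who holds them, and how long the query has been running
    psql -d yourdb -c '
    SELECT a.procpid,
           a.current_query,
           l.mode,
           l.relation::regclass AS locked_relation,
           now() - a.query_start AS running_for
    FROM   pg_locks l
    JOIN   pg_stat_activity a ON a.procpid = l.pid
    WHERE  l.granted
    ORDER  BY running_for DESC
    LIMIT  20;'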
I'd venture to say your system does not have enough memory and/or disk
bandwidth, or your Pg is not tuned to make use of enough of your
memory. The most likely thing is that you're saturating your disk
I/O.
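If it turns out PostgreSQL simply isn't allowed to use enough memory,
these are the usual postgresql.conf knobs to look at first. The numbers
below are only illustrative for a machine with roughly 16 GB of RAM,
not recommendations for your box:

    # postgresql.conf (illustrative values; adjust to your hardware)
    shared_buffers = 4GB            # often around 25% of RAM
    effective_cache_size = 12GB     # planner hint: RAM the OS can use for caching
    work_mem = 32MB                 # per sort/hash, per backend; keep it conservative
    maintenance_work_mem = 512MB    # vacuum and index builds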
Check the various system statistics from iostat and vmstat to see what
your baseline load is, then compare them with what you see while
pg_dump is running. Are you dumping over the network, or to the local
disk as well?
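Something along these lines is enough for that comparison (standard
Linux sysstat/procps invocations; run each for a few minutes at
baseline and again while pg_dump runs, and watch %util and await in
iostat and the wa column in vmstat):

    iostat -dxk 5     # extended per-device stats in kB, every 5 seconds
    vmstat 5          # memory, swap, and CPU wait, every 5 seconds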
Agreed... additionally, how much of that 100 GB is actually changing?
You are probably backing up the same thing over and over. Maybe some
replication or differential backup would make your backups smaller and
easier on your I/O.
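As a rough sketch of what that could look like with the built-in tools
(continuous WAL archiving plus a periodic base backup, so unchanged
data is not re-dumped every time), with placeholder paths; note that
pg_basebackup needs 9.1 or later, and on older releases pg_start_backup()
plus a filesystem copy does the same job:

    # postgresql.conf
    wal_level = archive
    archive_mode = on
    archive_command = 'test ! -f /backup/wal/%f && cp %p /backup/wal/%f'

    # periodic base backup; restore = latest base backup + replayed WAL
    pg_basebackup -D /backup/base/$(date +%Y%m%d) -Ft -z -P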
-Andy