On 03/19/2012 01:51 PM, Guillaume Lelarge wrote:
> On Sun, 2012-03-18 at 21:06 -0700, Aleksey Tsalolikhin wrote:
>> Hi. When pg_dump runs, our application becomes inoperative (too
>> slow)....
>
> Depends on what your app is doing. It doesn't block any usual use of
> the database: DML statements are all accepted. But you cannot drop a
> table that pg_dump must save, and you cannot change its definition. So
> there are some DDL commands you cannot use during a dump....
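Right. Under the hood, pg_dump holds an ACCESS SHARE lock on every
table it dumps, so any DDL that needs a stronger lock (DROP TABLE, most
forms of ALTER TABLE) simply waits until the dump finishes. If you
suspect a statement is stuck behind the dump, a quick check for
ungranted locks, using a placeholder database name:

    psql -d yourdb -c "SELECT locktype, relation::regclass, mode, granted
                       FROM pg_locks WHERE NOT granted;"

Any rows returned are lock requests still waiting.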
Dumping may not technically block access but it *does*, of course,
consume resources.
Most obvious is that it requires reading all table data in its
entirety. This will cause competition for disk access and may
temporarily push active data out of cache.
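If the dump runs on the database host, one rough mitigation on Linux is
to drop pg_dump's scheduling priority. Whether this helps depends on
your I/O scheduler, and the path and database name below are made up:

    ionice -c 3 nice -n 19 pg_dump yourdb > /backup/yourdb.sql

That only eases the disk competition; it does nothing for cache
eviction.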
You also have to write the dump somewhere. If the destination is on the
same drive as your database, you will have write competition. If it is
on another machine, the transfer will use network resources.
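A common arrangement is to run pg_dump on the backup machine and let it
pull from the database host over the network, so the dump file is never
written locally on the server. A sketch, with made-up host, user, and
database names:

    pg_dump -h db.example.com -U postgres -f /backup/yourdb.sql yourdb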
If you compress the data, either externally or by using a compressed
dump format, whatever machine does the actual compression will need the
CPU to handle it.
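Both styles are easy to try; -Fc and -Z are standard pg_dump flags, and
the names below are again made up:

    # compress on the database host, inside pg_dump itself
    pg_dump -Fc -Z 6 -f /backup/yourdb.dump yourdb

    # send the data uncompressed and compress on the backup host
    pg_dump -h db.example.com yourdb | gzip > /backup/yourdb.sql.gz

The first keeps the dump small at the cost of CPU on the database
server; the second moves that CPU cost to the backup machine at the
cost of more bytes on the wire.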
To assist, we need more info. Tell us:
- the database size,
- some details about your dump process (same or different machine,
  compression, etc.),
- how long your dumps take to run,
- how many backends are typically running, and how many you reach
  during a dump,
- whether or not any web processes alter tables,
plus any other info you think may be of use. A couple of those numbers
are easy to pull; see below.
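For the size and backend counts, something like this (placeholder
database name) should do:

    psql -d yourdb -c "SELECT pg_size_pretty(pg_database_size(current_database()));"
    psql -d yourdb -c "SELECT count(*) FROM pg_stat_activity;"

Run the second one both at a quiet moment and mid-dump to see how much
the dump adds.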
Cheers,
Steve