Jared Mauch wrote:
> pg_dump is utilizing about 13% of the CPU and the
> corresponding postgres backend is at 100% CPU time.
> (multi-core, multi-cpu, lotsa ram, super-fast disk).
> ...
> Any tips on getting pg_dump (actually the backend) to perform
> much closer to 500k/sec or more? This would also aid me when I upgrade
> pg versions and need to dump/restore with minimal downtime (as the data
> never stops coming.. whee).
I would suggest running oprofile to see where the time is spent; there
might be some simple source-level optimizations that would help.
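Something along these lines should do it with the opcontrol-based
oprofile interface (a rough sketch; the exact flags depend on your
oprofile version, and the path to the postgres binary is just a guess
for your system):

    opcontrol --init
    opcontrol --no-vmlinux
    opcontrol --start
    # ... let the pg_dump run for a minute or two ...
    opcontrol --shutdown
    opreport -l /usr/lib/postgresql/bin/postgres

The opreport output shows which backend functions the samples landed in,
which tells you whether the time is going into datatype output functions,
COPY machinery, or something else.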
Where the time is spent depends a lot on the schema and data. For
example, I profiled a pg_dump run on a benchmark database a while ago,
and found that most of the time was spent in sprintf, formatting
timestamp columns. If you have a lot of timestamp columns, that might be
the bottleneck for you as well; or it might be something else entirely.
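One quick way to get a feel for whether output formatting is the problem
(just a sketch; this assumes 8.2 or later for COPY (SELECT ...), and
COPY TO a server-side file needs superuser):

    \timing
    COPY (SELECT now() FROM generate_series(1, 1000000)) TO '/dev/null';
    COPY (SELECT 42    FROM generate_series(1, 1000000)) TO '/dev/null';

If the timestamp version is much slower per row than the integer one,
the time is going into formatting rather than reading or writing the data.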
Or if you can post the schema for the table you're dumping, maybe we can
make a more educated guess.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com