Leonardo F wrote:
> Is there anything else I can try to "help" postgres update those
> index faster?
It sounds like your problem is that read/write activity on the indexes
is becoming increasingly large and random as more data is inserted.
There are two standard ways to improve on that:
1) Periodically rebuild the indexes and hope that the new version is
laid out better than the one you've ended up with after all the random
key value insertions (see the REINDEX sketch after this list).
2) Optimize your database server configuration to perform better under
this particular situation. The usual set of tricks is to increase
shared_buffers, checkpoint_segments, and checkpoint_timeout to cut down
on how often the database has to write out the index blocks, and to
improve the odds that the ones it needs to read are already in its
cache (see the configuration sketch after this list).
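
To make those two concrete, here's a rough sketch; the object names
and the settings below are placeholders to adapt, not values tuned to
your workload:

-- (1) Rebuild one index. Note that REINDEX locks out writes to the
-- underlying table while it runs, so do it during a quiet period.
REINDEX INDEX my_big_table_ts_idx;   -- placeholder index name

-- Or rebuild every index on a table in one pass:
REINDEX TABLE my_big_table;          -- placeholder table name

# (2) postgresql.conf starting points for write-heavy index updates;
# exact values depend on your RAM and how much WAL space you can spare:
shared_buffers = 2GB                 # often ~25% of RAM on a dedicated box
checkpoint_segments = 32             # fewer, larger checkpoints
checkpoint_timeout = 15min           # longer gaps between checkpoints
checkpoint_completion_target = 0.9   # spread checkpoint writes out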
It's hard to say whether increasing raw disk speed will help you as
much as you'd like. Index-related performance is often driven by
whether the working set needed to maintain the indexes efficiently
fits in RAM. Once you've exceeded that, performance drops off very
quickly, and a linear increase in disk speed may not recover very much
of it.
You can look at the size of all the active indexes using something like
the first query at http://wiki.postgresql.org/wiki/Disk_Usage to get an
idea of how big they are relative to RAM. Sometimes having more memory
is the only good way to scale upwards in this situation while retaining
something close to the original performance.
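
For example, something along these lines (a simplified variation, not
the exact query from the wiki page) will list your largest indexes:

SELECT c.relname AS index_name,
       pg_size_pretty(pg_relation_size(c.oid)) AS index_size
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'i'                         -- indexes only
  AND n.nspname NOT IN ('pg_catalog', 'pg_toast')
ORDER BY pg_relation_size(c.oid) DESC
LIMIT 20;

Compare the top few sizes there against your RAM to see whether the hot
part of the indexes can plausibly stay cached.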
--
Greg Smith 2ndQuadrant US Baltimore, MD
PostgreSQL Training, Services and Support
greg@xxxxxxxxxxxxxxx www.2ndQuadrant.us