On Wed, 04 Jun 2008 00:36:09 +0200, andrew klassen <aptklassen@xxxxxxxxx>
wrote:
Running postgres 8.2.5
I have a table that has 5 indices and no foreign keys or dependencies on
any other table. If I delete the database and start entering entries,
everything works very well until I get to some point (let's say 1M rows).
Basically, I have a somewhat constant rate of inserts/updates that go into
a work queue and then get passed to postgres. The work queue starts
filling up as the responsiveness slows down. For example, at 1.5M rows it
takes >2 seconds for 300 inserts issued in one transaction.
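For context, the batch pattern in question looks roughly like this
(work_queue and its columns are hypothetical names, not from the mail):

    BEGIN;
    INSERT INTO work_queue (id, payload) VALUES (1, '...');
    -- ... roughly 300 such inserts per batch ...
    COMMIT;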
Prior to this point I had added a regular VACUUM ANALYZE on the table, and
it did help. I increased maintenance_work_mem to 128MB. I also set the
fillfactor on the table's indices to 50% (not sure if that made any
difference; I have to study the results more closely).
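For reference, those steps look roughly like this (the table and index
names are made-up placeholders):

    -- routine maintenance on the hot table
    VACUUM ANALYZE work_queue;

    -- raise per-session memory for maintenance operations
    SET maintenance_work_mem = '128MB';

    -- lower the index fillfactor; takes effect on the next (re)build
    ALTER INDEX work_queue_key_idx SET (fillfactor = 50);
    REINDEX INDEX work_queue_key_idx;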
In an effort to figure out the bottleneck, I dropped 4 of the indices on
the table and the tps increased to over 1000. I don't really know which
index removal gave the biggest performance improvement. I dropped two
32-bit indices and two text indices, all of them btree.
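To isolate which index hurts most, one pattern would be to drop a single
index, rerun the insert workload, and rebuild it before testing the next
(names here are hypothetical; CREATE INDEX CONCURRENTLY is available as of
8.2 and avoids blocking concurrent writers):

    -- test one index in isolation
    DROP INDEX work_queue_text_a_idx;
    -- ...run the insert workload and record the tps...

    -- rebuild without blocking concurrent inserts (new in 8.2)
    CREATE INDEX CONCURRENTLY work_queue_text_a_idx
        ON work_queue USING btree (text_col_a);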
The CPU load is not that high, i.e. there is plenty of idle CPU. I am
running an older version of FreeBSD and the iostat output is not very
detailed. During this time the reported transfer rate is low, < 10Mbs.
The system has an LSI Logic MegaRAID controller with 2 disks.
Any ideas on how to find the bottleneck / decrease the overhead of index
usage?
Thanks.
If you are filling an empty table it is generally faster to create the
indexes after the data import.
Of course if this is a live table or you need the indexes during the
import, this is not an option.
I find it generally faster to lightly preprocess the data and generate
text files that I then import using COPY, and then to do the rest of the
processing in SQL.
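A minimal sketch of that workflow (file path, table, and column names are
made up for illustration):

    -- bulk-load the raw rows; far cheaper than row-by-row INSERTs
    COPY work_queue_staging FROM '/tmp/batch.txt';

    -- do the remaining processing in SQL
    INSERT INTO work_queue (id, payload)
        SELECT id, trim(payload) FROM work_queue_staging;

    -- with the table filled, build the indexes in one pass
    CREATE INDEX work_queue_payload_idx ON work_queue USING btree (payload);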
How much RAM in the box ? size of the data & indexes ?