Tom Lane wrote:
> jesper@xxxxxxxx writes:
>> If i understand the technicalities correct then INSERT/UPDATES to the
>> index will be accumulated in the "maintainance_work_mem" and the "user"
>> being unlucky to fill it up will pay the penalty of merging all the
>> changes into the index?
>
> You can turn off the "fastupdate" index parameter to disable that,
> but I think there may be a penalty in index bloat as well as insertion
> speed. It would be better to use a more conservative work_mem
> (work_mem, not maintenance_work_mem, is what limits the amount of stuff
> accumulated during normal inserts).

Ok, I read the manual about that. Seems worth testing. What I'm seeing is
stuff like this:

2009-10-21T16:32:21
2009-10-21T16:32:25
2009-10-21T16:32:30
2009-10-21T16:32:35
2009-10-21T17:10:50
2009-10-21T17:10:59
2009-10-21T17:11:09
...

then it went on steadily for another 180,000 documents. Each row is a
printout from the application doing the INSERTs; it prints the time for
every 1,000 rows it gets through. It is the 38 minutes in the middle I'm
a bit worried about. work_mem is set to 512MB; may that translate into
roughly 180,000 documents on my system?

What I seem to be missing is a way to make sure some "background"
application is the one paying the penalty, so a random user doing a single
insert won't get stuck. Is that doable? It also seems to lock out other
inserts while it is in this state.

--
Jesper
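
PS: In case it helps others reading along, here is a minimal sketch of the
two approaches discussed above, assuming a GIN index named body_fts_idx on
a table named documents and an application role named webuser (all names
invented for illustration):

  -- Option 1 (Tom's first suggestion): turn off the pending-list mechanism
  -- entirely, at some cost in insertion speed and possibly index bloat.
  ALTER INDEX body_fts_idx SET (fastupdate = off);

  -- Option 2: keep fastupdate, but give the interactive role a smaller
  -- work_mem so the pending list stays short and each flush is cheaper...
  ALTER ROLE webuser SET work_mem = '32MB';

  -- ...and let a scheduled background job merge the pending entries,
  -- since a plain VACUUM of the table also cleans the GIN pending list.
  VACUUM documents;

Whether option 2 really keeps a single unlucky user from paying the whole
bill depends on how often that background VACUUM runs, so I'd treat it as
something to test rather than a guaranteed fix.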