[tv@xxxxxxxx]
> The table was quite huge (say 20k products along with detailed
> descriptions etc.) and was completely updated about 12 times each day,
> i.e. it grew to about 12x the original size (and 11/12 of the rows were
> dead). This caused a serious slowdown of the application each day, as
> the database had to scan 12x more data.

The tables we had problems with are transaction-type tables with millions
of rows and mostly inserts ... eventually a few attributes are updated,
and only on the most recent entries.

I tried a lot of tuning, but eventually gave up. Vacuuming those tables
took a long time (even if only a very small fraction of the table had
been touched), and insert performance dropped to an unacceptable level.

By now we've upgraded the hardware, so it could be worth playing with it
again, but our project manager is both paranoid and conservative, and
proud of it, so I would have to prove that autovacuum is good for us
before I'm allowed to turn it on again ;-)
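If we do try it again, the plan would be roughly the sketch below. This
assumes PostgreSQL 8.3+ for the n_dead_tup statistics column and 8.4+ for
per-table autovacuum storage parameters; the table name "transactions"
and all the numbers are placeholders for illustration, not tested
recommendations:

  -- How many dead rows have piled up versus live ones?
  SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
  FROM pg_stat_user_tables
  WHERE relname = 'transactions';

  -- Tune autovacuum for this one table only, so a huge mostly-insert
  -- table gets smaller, throttled vacuums instead of one enormous pass:
  ALTER TABLE transactions SET (
      autovacuum_vacuum_scale_factor = 0.01, -- vacuum after ~1% dead rows
      autovacuum_vacuum_cost_delay   = 20    -- ms; throttle so inserts keep up
  );

The point is just that the knobs exist per table now, so the gentle
settings this table needs would not have to apply to the whole cluster.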