We're thinking we might set vacuum_cost_limit to around 100, set
vacuum_cost_delay to 100, and then just run vacuumdb from a cron job every
15 minutes or so. Does this sound silly?
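Concretely, something along these lines is what we have in mind (the
database name and the exact values are just placeholders):

    # postgresql.conf -- cost-based vacuum delay (8.0 settings)
    vacuum_cost_delay = 100   # milliseconds to sleep once the cost limit is hit
    vacuum_cost_limit = 100   # accumulated page-cost that triggers the sleep

    # crontab entry -- vacuum and analyze the whole database every 15 minutes
    */15 * * * * vacuumdb --analyze --quiet our_database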
It doesn't sound completely silly, but if you are doing inserts and not
updates/deletes then there's really nothing for VACUUM to do.
An ANALYZE command might get the same result with less effort.
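For an insert-only table that could be as simple as the following (the
table name here is only illustrative):

    -- refresh planner statistics only; no dead-tuple cleanup is attempted
    ANALYZE main_table;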
I think that perhaps the fact that we are doing updates in the secondary
table to track the import is the culprit here. It gets updated for each
item inserted into the main table, so even though it has only 500 rows, it
ended up with about 2 million dead tuples, which left a lot to be desired
in terms of sequential scan speed. VACUUM FULL cleared this up, so I assume
a frequent regular vacuum would keep it in tip-top condition.
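If it is mainly that one small tracking table that bloats, a targeted
vacuum scheduled much more often than the full-database pass might be
enough; a rough sketch, with hypothetical table and database names:

    # crontab entry -- vacuum just the heavily-updated tracking table every 5 minutes
    */5 * * * * vacuumdb --quiet --table import_status our_database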
We are using PG 8.0.1.
Thanks for your help Tom.
--
David Mitchell
Software Engineer
Telogis