"Karl Wright" <kwright@xxxxxxxxxxxxx> writes:
> Fine - but what if the previous vacuum is still in progress, and does not
> finish in 5 minutes?

Yes, well, there are problems with this design, but the situation is already
much improved in 8.2 and there are more improvements on the horizon. It's
likely that much of your pain here is artificial, though, and once your
database is cleaned up a bit more it will be easier to manage.

> Well, the smaller tables don't change much, but the bigger tables have a
> lively mix of inserts and updates, so I would expect these would need
> vacuuming often.

Hm, I wonder if you're running into a performance bug that was fixed sometime
back around then. It involved having large numbers of tuples indexed with the
same key value: every search for a single record required linearly searching
through the entire list of values. If you have thousands of updates against
the same tuple between vacuums you'll have the same kind of situation, and
queries against that key will indeed require lots of CPU.

To help any more you'll have to answer the basic questions, like how many rows
are in the tables that take so long to vacuum, and how large they are on disk.
On 7.4 I think the best way to get the table size is actually to do

  select relfilenode from pg_class where relname = 'tablename'

and then look in the postgres data directory for the files matching
base/*/<relfilenode>*

The best information would be to do VACUUM VERBOSE and report the data it
prints out.

-- 
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com
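[For illustration, the second step above (totalling the files matching base/*/<relfilenode>* under the data directory) could be sketched as below. The data-directory path and relfilenode value are hypothetical; the glob deliberately mirrors the pattern in the message, so it also picks up segment files like 12345.1.]

```python
# Sketch: sum the on-disk size of a table's files, given the relfilenode
# returned by the pg_class query above. PGDATA path and relfilenode are
# made-up examples.
import glob
import os


def table_size_on_disk(pgdata: str, relfilenode: int) -> int:
    """Total bytes of files matching base/*/<relfilenode>* under pgdata,
    which includes 1GB segment files such as <relfilenode>.1, .2, ..."""
    pattern = os.path.join(pgdata, "base", "*", f"{relfilenode}*")
    return sum(os.path.getsize(p) for p in glob.glob(pattern))


if __name__ == "__main__":
    print(table_size_on_disk("/var/lib/postgresql/data", 12345))
```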