Hi, Thomas,

me@xxxxxxxxxxxxx wrote:
>> Try a), b), and c) in order on the "offending" tables as they address
>> the problem at increasing cost...
>
> thanks a lot for the detailed information! the entire concept of vacuum
> isn't yet that clear to me, so your explanations and hints are very much
> appreciated. i'll definitely try these steps this weekend when the next
> full vacuum is scheduled :-)

Basically, VACUUM scans the whole table looking for pages that contain garbage rows (or row versions), deletes the garbage, and adds those pages to the free space map (if there are free slots). When allocating new rows / row versions, PostgreSQL first tries to fit them into pages from the free space map before allocating new pages. This is why a high max_fsm_pages setting can help when VACUUM frequency is low.

VACUUM FULL additionally moves rows between pages, trying to concentrate all the free space at the end of the table (aka "defragmentation"), so that it can then truncate the files and release the space to the filesystem.

CLUSTER basically rebuilds the table by copying all rows into a new table in index order and then dropping the old one. This also reduces fragmentation, though not as thoroughly as VACUUM FULL does.

ANALYZE creates statistics about the distribution of values in each column, allowing the query optimizer to estimate the selectivity of query criteria.

(This explanation is rather simplified; it ignores indices as well as the fact that a table can consist of multiple files. Also, I believe that newer PostgreSQL versions allow VACUUM to truncate files when free pages happen to appear at the very end of the file.)

HTH,
Markus

--
Markus Schaber | Logical Tracking&Tracing International AG
Dipl. Inf.     | Software Development GIS

Fight against software patents in EU! www.ffii.org www.nosoftwarepatents.org
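As a quick illustration of the maintenance commands discussed above, a session might look like the following sketch. The table name "mytable" and index name "mytable_pkey" are placeholders, not names from the original thread; note that VACUUM FULL and CLUSTER both take an exclusive lock on the table, so they are best run off-peak:

```sql
-- Routine garbage collection plus fresh planner statistics:
VACUUM ANALYZE mytable;

-- More aggressive: compact the table and return free space
-- to the filesystem (exclusive lock, can take a long time):
VACUUM FULL mytable;

-- Rebuild the table in the order of one of its indexes
-- (also an exclusive lock; "mytable_pkey" is a placeholder):
CLUSTER mytable_pkey ON mytable;
```

A plain VACUUM (without FULL) only needs a weaker lock and can run alongside normal reads and writes, which is why it is the usual choice for regular scheduling.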
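For the free space map sizing mentioned above, the settings live in postgresql.conf. The numbers below are purely illustrative, a sketch of the idea rather than recommended values; max_fsm_pages should roughly cover the number of pages with reusable free space accumulating across all tables between VACUUM runs:

```
# postgresql.conf -- illustrative values only
# Upper bound on pages tracked in the free space map:
max_fsm_pages = 200000
# Upper bound on relations (tables/indexes) tracked:
max_fsm_relations = 1000
```

If the map is too small, pages with free space are simply forgotten, and the table bloats even though VACUUM runs regularly.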
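To see the effect of ANALYZE, one can look at the statistics it collects; a minimal sketch, again with "mytable" as a placeholder name:

```sql
-- Per-column statistics gathered by ANALYZE, as seen by the planner:
SELECT attname, n_distinct, most_common_vals
FROM pg_stats
WHERE tablename = 'mytable';

-- Rough table size bookkeeping (relpages shrinks after VACUUM FULL):
SELECT relname, relpages, reltuples
FROM pg_class
WHERE relname = 'mytable';
```

Comparing relpages before and after a VACUUM FULL is a simple way to check how much space was actually reclaimed.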