On Thu, Jun 4, 2009 at 7:31 AM, Erik Aronesty <erik@xxxxxxx> wrote:
> is there some way to view the level of "bloat that needs full" in each
> table, so i could write a script that alerts me to the need of a
> "vacuum full" without waiting for random queries to "get slow"?
>
> looking at the results of the "bloat query", i still can't see how to
> know whether bloat is getting bad in an objective manner.

One other thought on this...

I think the main thing to consider is bloat as a percentage of table
size. When you sequentially scan the table, a table with as much bloat
as data will take twice as long to scan, one with twice as much bloat as
data will take three times as long to scan, and so on. If you're only
ever doing index scans, the effect will be less noticeable, but in round
figures, comparing the amount of bloat to the amount of data is a good
place to start. I usually find 3x is about where the pain starts to
hit. Also, small tables can sometimes tolerate a higher percentage of
bloat than large ones, because table scans on them tend to be fast
anyway.

A lot of times bloat happens at one particular time and just never goes
away. Leaving an open transaction around for an hour or two can bloat
all of your tables, and they'll never get de-bloated on their own
without help. It would be nice if VACUUM had even a little bit of
capability for incrementally improving this situation, but currently it
doesn't. So when you mention running for a year, it's not unlikely that
you had one bad day (or several days in a row) when you collected all of
that bloat, rather than accumulating it gradually over time.

...Robert
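
[Editor's sketch of the kind of objective check the thread is asking about: the pgstattuple contrib module reports per-table dead-tuple and free-space figures directly. This assumes the contrib module has been installed in the database, and 'mytable' is a placeholder table name.]

```sql
-- Requires the pgstattuple contrib module to be installed.
-- Reports space usage for one table; 'mytable' is a placeholder.
SELECT table_len,           -- total bytes on disk
       tuple_len,           -- bytes of live data
       dead_tuple_len,      -- bytes held by dead tuples
       dead_tuple_percent,  -- dead tuples as % of table_len
       free_space,          -- reusable free bytes
       free_percent         -- free space as % of table_len
FROM pgstattuple('mytable');
```

Comparing (dead_tuple_len + free_space) to tuple_len gives roughly the bloat-to-data ratio described above; by the rule of thumb in this post, a ratio around 3x is where the pain tends to start. Note that pgstattuple does a full scan of the table, so it is better suited to a periodic monitoring script than to frequent ad-hoc use.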