"Filip Rembiałkowski" <plk.zuber@xxxxxxxxx> wrote:
>
> 2007/9/18, Joshua D. Drake <jd@xxxxxxxxxxxxxxxxx>:
> >
> > If you overrun your max_fsm_pages then vacuum analyze IS NOT ENOUGH. If
> > you do not overrun your max_fsm_pages, yes vacuum analyze can deal with
> > the issue.
>
> Are you sure? I have a situation where the above is not true. postgres
> version 8.1.8, while vacuum verbose says:
>
> INFO: free space map contains 2329221 pages in 490 relations
> DETAIL: A total of 2345744 page slots are in use (including overhead).
> 2345744 page slots are required to track all free space.
> Current limits are: 10000000 page slots, 1000 relations, using 58698 KB.
>
> ... and we have a constant problem with index bloat and need to REINDEX
> frequently.
>
> the database is very redundant and has quite a high data retention rate
> (it's an ecommerce site)

I've been involved in a number of the discussions on this, and I think
part of the confusion stems from the fact that "index bloat" is an
ambiguous term.

If the index gets large enough that it no longer fits in shared memory,
and reindexing it will reduce its size to where it _will_ fit in shared
memory, then the index _could_ be said to be "bloated". However, an
equally valid solution to that problem is to increase the amount of
shared memory available (possibly by adding RAM).

Unfortunately, folks like Phoenix are looking for yes/no answers, and
with many of these questions, the _correct_ answer is "it depends on
your workload". If you find that reindexing improves performance, then
you should investigate further.
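One way to investigate is to compare an index's on-disk size before and after a REINDEX; if the rebuilt index is much smaller, bloat is real rather than a caching issue. A minimal sketch, using a hypothetical index name (orders_pkey) and the pg_relation_size/pg_size_pretty functions available since 8.1:

```sql
-- Hypothetical index name; substitute one from your own schema.
SELECT pg_size_pretty(pg_relation_size('orders_pkey'));

REINDEX INDEX orders_pkey;

SELECT pg_size_pretty(pg_relation_size('orders_pkey'));

-- On 8.2 and later, an index can also be rebuilt with a lower
-- fillfactor, leaving free space on each page so it re-bloats
-- more slowly under heavy update traffic:
ALTER INDEX orders_pkey SET (fillfactor = 70);
REINDEX INDEX orders_pkey;
```

If the size drops sharply after the rebuild, a scheduled REINDEX (or, on newer versions, a lower fillfactor) is worth considering.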
Depending on the exact nature of the problem, there are many possible
solutions; three that come to mind:
* Add RAM/SHM
* REINDEX on a regular schedule
* (with newer versions) reduce the fill factor and REINDEX

--
Bill Moran
http://www.potentialtech.com

---------------------------(end of broadcast)---------------------------
TIP 9: In versions below 8.0, the planner will ignore your desire to
       choose an index scan if your joining column's datatypes do not
       match