Brian Cox <brian.cox@xxxxxx> writes:
> I have a largish (pg_dump output is 4G) database. The query:
> select count(*) from some-table
> was taking 120 secs to report that there were 151,000+ rows.
> This seemed very slow. This db gets vacuum'd regularly (at least once
> per day). I also did a manual 'vacuum analyze', but after it completed,
> the query ran no faster. However, after dumping the database and
> recreating it from the backup, the same query takes 2 secs.
> Why the dramatic decrease?

Presumably, the table was really bloated (lots of unused space).

> Would 'vacuum full' have achieved the
> same performance improvements?

It would've compacted the table all right, but probably left the
indexes worse off.

> Is there anything else that needs to be done
> regularly to prevent this performance degradation?

I suspect that your FSM settings are too low, causing free space found
by VACUUM to be forgotten about.  You might also need to consider
vacuuming more than once a day (there's a tradeoff between how often
you vacuum and how much FSM space you need).

			regards, tom lane
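
[Editor's note: the FSM settings referred to above are the max_fsm_pages and
max_fsm_relations parameters in postgresql.conf, which exist in PostgreSQL
releases prior to 8.4 (where the free space map moved on-disk and these
knobs disappeared). A hedged sketch with illustrative values, not a tuning
recommendation:]

```
# postgresql.conf -- free space map settings (PostgreSQL pre-8.4 only;
# the values below are illustrative)
max_fsm_pages = 500000      # must cover the total number of pages with
                            # reusable free space across all relations;
                            # a database-wide VACUUM VERBOSE reports how
                            # many pages the FSM actually needs
max_fsm_relations = 1000    # number of tables and indexes tracked

# Changing either setting requires a server restart to take effect.
```

If the "pages needed" figure reported by VACUUM VERBOSE exceeds
max_fsm_pages, free space found by VACUUM is forgotten and the table
bloats until a dump/reload or VACUUM FULL reclaims it.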