I have a largish database (pg_dump output is 4 GB). The query:

select count(*) from some_table

was taking 120 seconds to report that there were 151,000+ rows.
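For reference, a quick way to see what that scan is doing (just a sketch, using the placeholder name some_table):

-- In 8.1, count(*) always does a sequential scan over the whole table;
-- EXPLAIN ANALYZE reports the actual time and row count for that scan.
EXPLAIN ANALYZE SELECT count(*) FROM some_table;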
This seemed very slow. The database is vacuumed regularly (at least once
per day). I also ran a manual VACUUM ANALYZE, but after it completed the
query was no faster. However, after dumping the database and recreating
it from the backup, the same query takes 2 seconds.
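In case it helps pin down the difference, here is a bloat check that could be run before and after the restore (a sketch; relpages and reltuples are estimates from the last VACUUM or ANALYZE, and pg_relation_size/pg_size_pretty are available in 8.1):

-- A relpages count far larger than reltuples would justify
-- suggests the table is full of dead-tuple bloat.
SELECT relname, relpages, reltuples,
       pg_size_pretty(pg_relation_size('some_table')) AS on_disk
  FROM pg_class
 WHERE relname = 'some_table';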
Why the dramatic decrease? Would VACUUM FULL have achieved the same
performance improvement? Is there anything else that needs to be done
regularly to prevent this kind of degradation?
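For reference, the commands I had in mind (a sketch; note that VACUUM FULL takes an exclusive lock on the table for the duration):

-- Compact the table by moving live rows into lower pages.
VACUUM FULL ANALYZE some_table;
-- Indexes can remain bloated after VACUUM FULL, so rebuild them too.
REINDEX TABLE some_table;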
PostgreSQL 8.1.3 running on Red Hat ES 4.
Thanks,
Brian