> INFO: index "job_log_id_pkey" now contains 10496152 row versions in
> 59665 pages
See the 10496152 above? That means you have 10496152 row versions in your
table. Of those, only 365000 are alive. That means you have basically
never vacuumed this table before, correct?
Almost correct :| I have vacuumed this table monthly (obviously not nearly
enough), but it is basically a log of events, of which there are a very
large number each day.
Every UPDATE or DELETE leaves a dead row version behind. count(*) scans the
whole table, dead rows included. That's why it takes so long: the table acts
as though it has 10496152 rows when doing sequential scans.
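To see the effect in miniature, here is a toy sketch (hypothetical table and
made-up row counts, just for illustration):

    -- Under MVCC, every UPDATE leaves the old row version behind as dead.
    CREATE TABLE t (id int);
    INSERT INTO t SELECT * FROM generate_series(1, 1000);
    UPDATE t SET id = id;    -- 1000 live rows, plus 1000 dead versions
    SELECT count(*) FROM t;  -- returns 1000, but the scan visits ~2000 versions
    VACUUM VERBOSE t;        -- reports the dead row versions it cleans up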
Oh! This explains my problems.
Do a VACUUM FULL on it or CLUSTER it on an index, both of which will empty
out all the free space and make it behave as it should. Note: VACUUM FULL
will take quite a while and requires an exclusive lock on the table.
CLUSTER also requires an exclusive lock, but should be a lot faster for
this table.
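For example (assuming your table is called job_log, inferred from the index
name in the output above; note that 8.0 uses the older CLUSTER index ON
table form):

    -- Option 1: rewrite the table in place, reclaiming all free space
    VACUUM FULL VERBOSE job_log;

    -- Option 2: rewrite the table sorted by its primary key index
    -- (8.0 syntax; 8.3+ spells it CLUSTER job_log USING job_log_id_pkey)
    CLUSTER job_log_id_pkey ON job_log;

    -- Refresh planner statistics afterwards either way
    ANALYZE job_log;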
Oh, and get autovacuum set up and working, posthaste.
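As a rough sketch: on 8.1 and later autovacuum is built in and gets enabled
in postgresql.conf along these lines (on 8.0 it instead ships separately as
the contrib pg_autovacuum daemon):

    # postgresql.conf, 8.1-era settings; check the docs for your version
    stats_start_collector = on   # autovacuum needs row-level statistics
    stats_row_level = on
    autovacuum = on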
The exclusive lock is going to cause problems for me since the table is very
active. Is there a way of getting around that, or do I need to schedule the
maintenance around the application that accesses this table?
I'm running version 8.0. Is there autovacuum for this version too?
Regards,
Bealach