Try running pgstattuple on some of the tables that have not had a VACUUM FULL, and on some that have. It reports how many dead tuples there are in the table, which is a good indicator of how badly the table needs a full vacuum.
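For example (a sketch; pgstattuple ships as a contrib module in 8.0 and must be loaded first, and the table name below is a placeholder):

```sql
-- Load the contrib module once per database, e.g.:
--   psql -d yourdb -f .../share/contrib/pgstattuple.sql  (path varies by install)
SELECT * FROM pgstattuple('your_table');
-- Watch dead_tuple_count and dead_tuple_percent: a high percentage means
-- plain VACUUM isn't keeping up and the table is a candidate for VACUUM FULL.
```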
Run ipcs on the Unix command line to see how much of your SHMMAX is being used. You may have crossed a threshold in one of the handful of Postgres parameters that consume that shared memory, such as max_fsm_pages (see the free space map report at the end of a full-database VACUUM VERBOSE).
On 7/28/06, Nolan Cafferky <Nolan.Cafferky@xxxxxxxxxxxxxxxxxx> wrote:
Synopsis: VACUUM ANALYZE on the full database used to take just a few
minutes; now it takes several hours, with no apparent improvement in
successive runs.
Details:
I have a production database server hosting two heavily used databases
and not much else. We're currently running postgres 8.0.8. Normally we
have a VACUUM ANALYZE run nightly on both databases, which only takes a
couple of minutes each to complete. We also have a report that runs
hourly on one of the databases and dumps a large amount of data into a
materialized view. It normally takes 10-20 minutes (we could probably
optimize it, but it's never made it up the priority list).
Anyway, about two nights ago, the hourly report started running
indefinitely, and we had to turn it off after 16 copies of it piled up
waiting for the first one to finish. Since then, VACUUM ANALYZE has
been taking several hours instead of several minutes on both databases.
Yesterday I ran the VACUUM ANALYZE manually on both databases, hoping
that there was just some transient cleanup problem, but we've had the
same results today.
What would cause this, and what can I do to fix it? For the moment, I'm
going to claim the "we didn't change anything!" mantra - no development
we've done in the past few days seems like it would significantly
influence both databases. The ideas I haven't tried so far are:
* Try out the autovacuum service
* Re-index tables (this hasn't been done for at least months, maybe never)
* Do some selective VACUUM FULL on high-use tables (materialized view
for report seems like a likely culprit, but also seems like it wouldn't
influence both databases)
* Restart postgres, restart the machine itself, and other useless handwaving
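The reindex and selective VACUUM FULL ideas would look something like this (the table name is a placeholder, not from the actual schema; both commands take exclusive locks, so they belong in an off-hours window):

```sql
-- Rebuild the table's indexes, which plain VACUUM never shrinks:
REINDEX TABLE report_matview;
-- Rewrite the heavily churned materialized-view table to reclaim dead space:
VACUUM FULL ANALYZE report_matview;
```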
\begin{more-background-information}
* The database server is a quad Opteron, about 2GHz each, with 8 GB of
RAM and a several-disk RAID array. It's burly. I believe we're running
on a Gentoo Linux installation, although postgres was installed from source.
Again, we're running postgres 8.0.8. Here's some sample output from a
"vmstat 1 5" that I just ran:
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r  b   swpd    free   buff    cache   si   so    bi    bo    in    cs us sy id wa
 0  0   1208 5658464      0  2256384    0    0   554   344     1     1 10  2 83  6
 1  0   1208 5640272      0  2273928    0    0    24   476  1405  1885 12  3 83  2
 1  0   1208 5652368      0  2258628    0    0     0   560  1194   663  6  1 91  2
 0  0   1208 5653392      0  2259104    0    0    16   750  1979  4362 15  4 78  2
 1  0   1208 5649744      0  2259716    0    0    24   661  1651  3114 21  4 73  2
* Yes, so far we've been doing a direct VACUUM ANALYZE on everything,
plus VACUUM FULL ANALYZE on a few tables, instead of using the
autovacuum service like we should. It seems like there wouldn't be such
an abrupt change in performance because of that.
* Shortly after killing the 16 or so backed-up reports, the partition
holding postgres's data/subtrans directory filled up, and we had a
bunch of "No space left on device" errors for a minute or two. The
partitions do deserve some rearranging, but for now we've made some
adjustments and postgres is wallowing in free disk space.
\end{more-background-information}
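One more thing worth checking, given the pg_subtrans growth mentioned above: pg_subtrans balloons when a transaction stays open for a long time, and a long-open transaction also prevents VACUUM from reclaiming dead rows. A sketch against 8.0's pg_stat_activity (stats_command_string must be on for current_query to be populated):

```sql
-- Find the oldest backends; a long-open or idle-in-transaction backend
-- would explain both the subtrans growth and the vacuum slowdown.
SELECT procpid, usename, query_start, current_query
  FROM pg_stat_activity
 ORDER BY query_start ASC
 LIMIT 10;
```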
Suggestions?
--
Nolan Cafferky
Software Developer
IT Department
RBS Interactive
nolan.cafferky@xxxxxxxxxxxxxxxxxx