Jan <pgsql.admin@xxxxxxxxxxxxxxx> writes:
> That is, autovacuum on that table does not finish (even after six days
> it is still running). To narrow down the problem, I stopped autovacuum,
> temporarily disabled it (but ONLY on that particular table; not a
> permanent solution, I know) and started a manual VACUUM ANALYZE on that
> table via PGAdmin instead. Like the autovacuum, the manually started
> process also runs virtually endlessly (until I terminate it after some
> days), but PGAdmin's verbose output (see below) tells me at least that
> it (apparently) tries to process the same table over and over again.
> I'm definitely not a Postgres guru, but this can't be the expected
> behaviour, even on such a big table, right? Did I overlook something?

No, that looks fairly normal: each time it fills up memory with TIDs of
dead tuples, it has to go and scan the indexes to remove index entries
pointing at those tuples. You could increase maintenance_work_mem to
reduce the number of scans through the indexes, but it looks like you've
already got that set to 1GB or so, so I'm not sure that increasing it
further would be a good idea.

The good news is that you are making progress. Perhaps you should just
wait it out. However, it'd be good to try to estimate how many dead
tuples there are (have you checked the physical size of the table?). If
there are vastly more dead tuples than live ones, a VACUUM FULL might be
a less painful way to get out of this, though it would lock out other
accesses to the table, so you might have to take some downtime to do it.

While you're waiting, you should try to figure out how you got into a
state with so many dead tuples, and think about how to readjust things
so it doesn't happen again.

			regards, tom lane
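
A minimal sketch of the checks suggested above, assuming the table is
named big_table (a placeholder) and that the statistics collector is on;
n_live_tup and n_dead_tup are only estimates, and the 1GB setting simply
mirrors the value already mentioned in the thread:

    -- Rough bloat check: estimated live vs. dead tuples and on-disk size.
    SELECT relname,
           n_live_tup,
           n_dead_tup,
           pg_size_pretty(pg_total_relation_size(relid)) AS total_size
    FROM   pg_stat_user_tables
    WHERE  relname = 'big_table';

    -- Per-session bump of maintenance_work_mem before a manual vacuum,
    -- so each pass can hold more dead-tuple TIDs before it has to stop
    -- and scan the indexes.
    SET maintenance_work_mem = '1GB';
    VACUUM VERBOSE ANALYZE big_table;

If n_dead_tup dwarfs n_live_tup, that is the situation where the VACUUM
FULL route mentioned above (with its exclusive-lock caveat) tends to pay
off compared with letting a regular vacuum grind through the indexes
many times.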