Thanks... comments below.

> assuming you installed 9.0 from the yum.postgresql.com repositories, then,
> `yum update postgresql90-server` and restart the postgresql-9.0 service
> should do nicely.

This worked. Took me to 9.0.17 for some reason. I'm OK with this.

But the "vacuum full" was a terrible idea. I just spent two stretches of five hours each waiting for it to finish. Many websites/blogs say NOT to run vacuum full at all; run cluster instead. Is this better, then? My table is huge: over a billion rows. A `pg_dump` of the table and then a `pg_restore`, followed by reindexing, might work, but that would also cause serious downtime.

Anything else I can do? Just adjust the autovacuum settings, for example? My current settings are as follows:

    autovacuum = on
    autovacuum_max_workers = 5
    autovacuum_vacuum_cost_delay = 20ms
    autovacuum_vacuum_cost_limit = 350

And HOW do I know if this table is the one causing the issues? It's the only table that's heavily queried. Recently the PG server has locked up many times, and the pending queries have led to server outages. Whenever I run vacuumdb manually, this is the table where the process gets stuck for hours. So I need to tune this table back to its usual performance.

Appreciate any ideas! I'm sure there are much larger tables in the world than mine. What do those people do? (Apart from replication etc.) I've put sketches of the commands and checks I have in mind at the end of this mail.

Thanks.
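P.S. To make the questions concrete, here's what I mean by "run cluster". `my_big_table` and the index name are stand-ins for the real ones:

    -- Rewrites the table in index order. As far as I can tell this takes
    -- an ACCESS EXCLUSIVE lock for the duration, just like VACUUM FULL.
    CLUSTER my_big_table USING my_big_table_pkey;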
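This is roughly how I've been checking whether the big table is the one piling up dead rows (column names as they appear in the 9.0 statistics views):

    -- Dead-row counts and last (auto)vacuum times, worst tables first.
    SELECT relname,
           n_live_tup,
           n_dead_tup,
           last_vacuum,
           last_autovacuum
    FROM   pg_stat_user_tables
    ORDER  BY n_dead_tup DESC
    LIMIT  10;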
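And if the answer is simply that autovacuum never catches up on a billion-row table, my understanding is that the per-table storage parameters can make it kick in earlier. The numbers below are guesses on my part, not recommendations:

    -- Trigger autovacuum after ~0.5% of the table changes instead of the
    -- default 20%, and let it work faster on this one table.
    ALTER TABLE my_big_table SET (
        autovacuum_vacuum_scale_factor = 0.005,
        autovacuum_vacuum_threshold    = 10000,
        autovacuum_vacuum_cost_delay   = 10,
        autovacuum_vacuum_cost_limit   = 1000
    );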
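Finally, when the server locks up, this is how I try to see what's blocked (using the 9.0 column names, procpid/current_query, rather than the newer pid/query):

    -- Sessions currently waiting on a lock.
    SELECT procpid,
           waiting,
           current_query
    FROM   pg_stat_activity
    WHERE  waiting;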