I have an application that may add a couple million rows per day, so I vacuum nightly. The tables never grow to more than about 10 million rows before I move the interesting information off to other media.

Somewhat often, my VACUUMs don't complete, as shown by this ps output (usually a VACUUM takes a couple of minutes):

 4442 ?        R    949:07 postgres: postgres tag 127.0.0.1(33420) VACUUM

and the CPU for the postmaster (hyperthreaded Linux) sits in the high 90s. The locks look like:

scouts=# select * from pg_locks;
 relation | database | transaction |  pid  |           mode           | granted
----------+----------+-------------+-------+--------------------------+---------
          |          |    12826125 | 11642 | ExclusiveLock            | t
    16839 |    17230 |             | 11642 | AccessShareLock          | t
    17251 |    17230 |             |  4442 | RowExclusiveLock         | t
    17251 |    17230 |             |  4442 | ShareUpdateExclusiveLock | t
    17246 |    17230 |             |  4442 | ShareUpdateExclusiveLock | t
    17246 |    17230 |             |  4442 | ShareUpdateExclusiveLock | t
          |          |    12817402 |  4442 | ExclusiveLock            | t
(7 rows)

I do all inserts and maintenance through JDBC, and inserts may be going on in other Java threads while a separate thread issues the VACUUM command.

Any thoughts? Can I recover without restarting the server?

Thanks for any help,
Scott
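
P.S. In case it helps, here is roughly how the maintenance thread issues the VACUUM. This is a simplified sketch; the real code gets its connection from a pool, and the connection URL and table name below are placeholders, not the actual ones:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Simplified version of the nightly maintenance thread.
// The insert threads use their own connections in the same way.
public class NightlyVacuum implements Runnable {
    public void run() {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:postgresql://127.0.0.1/tag", "postgres", "");
             Statement st = conn.createStatement()) {
            // Autocommit stays on: VACUUM cannot run inside a transaction block.
            conn.setAutoCommit(true);
            // Placeholder table name; the real schema has several tables vacuumed in turn.
            st.execute("VACUUM ANALYZE some_table");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}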