Thanks, all, for the feedback, especially on the topic of versions. I plan on pushing that forward whenever I have an opening. For now, though, I've been placed in firefighter mode with this issue at the top of the list (long story behind it, but it involved a massive customer service problem on our end), so I need to be as sure as possible that a working plan is in place and ready to go.

From the earlier comments, it sounds like using CLUSTER on the table is the best way to go, since it compacts the space used for the data and is essentially how current Postgres versions go about doing a VACUUM FULL. The downsides are that it requires a service window, because it locks the table, and that it needs extra disk space to hold the temporary copy of the table while it is being rewritten and its indexes rebuilt. All of that is something I can work with, and it lets me do the job without restoring from a dump; of course I still plan to take a fresh dump before I start, just in case. A rough sketch of the commands I have in mind is at the end of this message.

I'm very grateful to all of you who have taken the time to give your opinions and advice, and I'm feeling positive about this work.
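
For the record, here's the rough sketch of what I'm planning to run. The table and index names below are just placeholders for my real ones, so adjust accordingly:

    -- Take a fresh custom-format dump first, just in case (run from the shell):
    --   pg_dump -Fc -t bloated_table mydb > bloated_table.dump

    -- CLUSTER takes an ACCESS EXCLUSIVE lock and rewrites the table,
    -- so it needs the service window and roughly the compacted size of
    -- the table (plus its indexes) in free disk space for the new copy.
    CLUSTER bloated_table USING bloated_table_pkey;

    -- Refresh planner statistics after the rewrite.
    ANALYZE bloated_table;

As I understand it, CLUSTER rebuilds the table's indexes as part of the rewrite, so a separate REINDEX afterwards shouldn't be necessary.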