Our DBA quit last week, leaving me with an interesting problem. We have a table currently using 33 GB of space for only 152 MB of data, due either to bad processes or to autovacuum not being aggressive enough. I confirmed the size difference two ways: with a CREATE TABLE AS SELECT, and by restoring the table from a dump file to a dev machine. There is a very large list of foreign key relationships that I'm not including for the sake of brevity. The database version is 8.4.1.

The old DBA had said that VACUUM FULL would take days to complete, and we don't have that large a window, so I was considering forcing a full table rewrite instead. In testing on a dev machine it only took about five minutes.

I don't have much hands-on experience with Postgres, so I wanted to get thoughts on what is considered the proper way to deal with this kind of situation. Any comments would be welcome.
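For reference, here's a minimal sketch of the kind of rewrite I'm considering, using placeholder names (bloated_table, bloated_table_pkey, id) rather than our actual schema:

    -- Check the on-disk footprint, including indexes and TOAST:
    SELECT pg_size_pretty(pg_total_relation_size('bloated_table'));

    -- One way: CLUSTER rewrites the table and rebuilds its indexes,
    -- holding an ACCESS EXCLUSIVE lock for the duration.
    CLUSTER bloated_table USING bloated_table_pkey;

    -- Another way: on 8.4, a column type change always forces a full
    -- table rewrite, even re-declaring a column as its existing type.
    ALTER TABLE bloated_table ALTER COLUMN id TYPE integer;

I believe the ALTER approach would also revalidate any foreign keys on the altered column, which could matter given how many we have.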