Hi,

I've been reading some old threads (pre-9.x) and the consensus seems to be to avoid massive deletes from a table, since they leave so much unrecoverable space (gaps) that a VACUUM FULL would be needed afterwards; a dump/restore is faster and cleaner instead.

That's all well and good, but what about a situation where the database is in production and cannot be taken down for that operation, or even for a CLUSTER? Any ideas on what I could do without losing all the live updates?

I need to get rid of about 11% of 150 million rows, each roughly 1 to 5 KB in size.

Thanks!

Version is 9.0.4.

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
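P.S. For reference, the approach I've been considering is deleting in small batches rather than one huge transaction, so autovacuum can reclaim dead tuples between rounds. A rough sketch (the table name `big_table` and the `created_at` predicate are placeholders for my actual schema):

```sql
-- Delete a small batch at a time; each run is a short transaction.
-- Repeat (e.g. from a cron job or loop) until it affects zero rows,
-- letting autovacuum reclaim the dead tuples in between.
DELETE FROM big_table
WHERE ctid IN (
    SELECT ctid
    FROM big_table
    WHERE created_at < '2010-01-01'  -- placeholder condition
    LIMIT 10000                      -- batch size; tune for load
);
```

My understanding is that this avoids one long-running transaction and keeps bloat bounded, though the reclaimed space is reused rather than returned to the OS. Does that sound sane?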