Re: Massive table bloat

On Tue, Dec 11, 2012 at 8:11 AM, Michael Sawyers <msawyers@xxxxxxx> wrote:
Our DBA quit last week, leaving me with an interesting problem.

We have a table currently using 33 GB of space for only 152 MB worth of
data, because of bad processes or autovacuum not being aggressive enough. I
was able to confirm the size difference by doing a CREATE TABLE AS SELECT,
along with a second test of restoring the table from the dump file to a dev
machine.
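
A minimal sketch of one way to measure the on-disk footprint before and after
such a copy, assuming the bloated table is called "orders" in a database
called "mydb" (both names are placeholders):

    psql -d mydb -c "SELECT pg_size_pretty(pg_total_relation_size('orders'));"
    psql -d mydb -c "CREATE TABLE orders_copy AS SELECT * FROM orders;"
    psql -d mydb -c "SELECT pg_size_pretty(pg_total_relation_size('orders_copy'));"

Comparing the two sizes shows how much of the 33 GB is dead space rather than
live rows.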

There is a very large list of foreign key relationships that I'm not
including for the sake of brevity.

The database version is 8.4.1

The old DBA had said that vacuum full would take days to complete, and we
don't have that much of a window.  So I was considering using the  to force
a full table rewrite.  In testing on a dev machine it only took about five
minutes.
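
The specific command is left out of the message above; as a purely
hypothetical illustration, one common way to force a full table rewrite on
8.4 is CLUSTER on an existing index ("orders" and "orders_pkey" are
placeholder names, and CLUSTER holds an exclusive lock on the table for the
duration of the rewrite):

    psql -d mydb -c "CLUSTER orders USING orders_pkey;"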

I do not have much hands-on experience with Postgres, so I wanted to get
thoughts on what is considered the proper way to deal with this kind of
situation.

Any comments would be welcome.


With only 33 GB of data, you might consider a full dump/drop-db/create-db/restore. You didn't say what sort of hardware you're using, or what sort of downtime you could afford, but generally pg_dump and pg_restore are remarkably fast. If you have spare/backup machines, you could even do trial runs to see how long it takes.
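
A rough sketch of that dump/drop/recreate/restore cycle, with "mydb" and the
dump path as placeholders (the custom-format dump lets pg_restore rebuild
indexes and foreign keys after the data is loaded):

    pg_dump -Fc -f /tmp/mydb.dump mydb
    dropdb mydb
    createdb mydb
    pg_restore -d mydb /tmp/mydb.dump

Timing each step on a trial run against a dev machine gives a realistic
estimate of the maintenance window needed.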

That would also give you the opportunity to upgrade to a new version of Postgres, although that might require changes to your app that you're not willing to make.  Even if you don't upgrade the version (you should at least upgrade to the latest 8.4.x release), dump/restore will fix your problem.

Craig James
