vacuuming very large table problem

Hello list!

We use PostgreSQL as the backend to our email gateway and keep all
emails in the database. We are running version 7.4.8 (yes, I know it's
old) with a rather specific schema (the application was designed that
way) -- every email is split into 2kb parts and fed into
pg_largeobject. So, long story short, I now have a catch-22 situation:
the database uses about 0.7TB and we are running out of disk space ;-)
I can delete some old data, but I cannot run VACUUM FULL to reclaim
disk space (it takes much longer than a full weekend), and I also cannot
dump/restore because there's no free space for a second copy of the
database.
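
For reference, this is roughly how I check how big pg_largeobject has
grown (pg_relation_size() doesn't exist on 7.4, so I go through
pg_class.relpages; the 8192 is the default block size, and the number is
only as fresh as the last vacuum/analyze):

  -- rough on-disk size of pg_largeobject on 7.4
  SELECT relname,
         relpages::bigint * 8192 AS approx_bytes
    FROM pg_class
   WHERE relname = 'pg_largeobject';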

So, within these restrictions, I figure I could somehow zero out all
the old entries in pg_largeobject, or even physically delete the
underlying files, and then rebuild all the necessary indexes.
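
For the index part I assume I'd need something like the statement below
afterwards, though I'm not sure how 7.4 behaves when asked to reindex a
system catalog, so treat this as a guess:

  -- rebuild the indexes on pg_largeobject after removing old data;
  -- run as superuser, ideally with nothing else connected
  REINDEX TABLE pg_largeobject;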

What is the best way to do this?
As I understand it, dd'ing /dev/zero over these files would make
postgres reinitialize the zeroed blocks, but I would still need a
VACUUM FULL over 0.7TB afterwards, am I right?
And if I simply delete the files and then start the postmaster, there
will be a lot of complaining in the logs, but will the most recent data
survive?
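
In case it matters, this is how I locate the files I mean (the relation
lives under $PGDATA/base/<database oid>/<relfilenode>, with .1, .2, ...
segments once it goes past 1GB):

  -- find the on-disk file name for pg_largeobject and the database oid
  SELECT c.relfilenode, d.oid AS dboid
    FROM pg_class c, pg_database d
   WHERE c.relname = 'pg_largeobject'
     AND d.datname = current_database();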

How can I delete, say, the oldest 70% of the data reasonably fast?
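
The only SQL-level approach I can think of is sketched below. The
messages table and its columns are made up here (we do have a table
mapping each stored email to its large object OID, just not with these
names), and as far as I understand a plain VACUUM afterwards only makes
the dead space reusable rather than giving it back to the filesystem,
which is exactly my problem:

  -- unlink the large objects behind old emails, then drop the mapping rows
  -- (messages(msg_id, lo_oid, received_at) is a stand-in for the real schema)
  SELECT lo_unlink(lo_oid)
    FROM messages
   WHERE received_at < '2005-01-01';

  DELETE FROM messages
   WHERE received_at < '2005-01-01';

  -- plain vacuum: marks the dead pg_largeobject pages as reusable,
  -- but (I believe) does not shrink the files on disk
  VACUUM pg_largeobject;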

P.S. Please cc me, as I'm not subscribed yet.
Thanks in advance!

regards,
if

