On Feb 15, 2008, at 4:56 AM, if wrote:
We use PostgreSQL as a backend to our email gateway and keep all emails in the database. We're running version 7.4.8 (yes, I know it's old) with a rather specific table schema (the application was designed that way): all emails are split into 2 kB parts and fed into pg_largeobject.

So, long story short, I now have a catch-22 situation: the database is using about 0.7 TB and we are running out of space ;-) I can delete some old stuff, but I cannot run VACUUM FULL to reclaim the disk space (it takes way more than a full weekend), and I also cannot dump/restore, as there's no free space for that (2x the database size).

With these restrictions applied, I figured I could somehow zero out all the old entries in pg_largeobject, or even physically delete its files and rebuild all the necessary indexes. What is the best way to do this? IMO, dd'ing /dev/zero over these files would just make Postgres reinitialize the empty blocks, and afterwards I'd still need a VACUUM FULL over 0.7 TB, am I right? And if I delete the files and then start the postmaster, there'll be lots of complaining, but will the latest data survive? How can I delete, for instance, the first 70% of the data reasonably fast?
You're still inserting new email, right? If so, why worry about handing the space back to the OS at all? Just delete some old stuff, let plain vacuum mark the space as reusable for new rows, and make sure that your FSM is big enough to track it (the easiest way to do that is to run vacuumdb -av).
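For the delete step, a slice-at-a-time approach might look something like the following. This is only a sketch: the emails table and its body_oid and received columns are hypothetical stand-ins for however your application actually maps each message to its large object, so substitute your real schema. The idea is to remove large objects with lo_unlink() (which deletes their chunks from pg_largeobject for you) rather than poking at pg_largeobject directly, and to work in small date slices so each transaction stays short and plain vacuum can keep up:

    BEGIN;
    -- Unlink the large objects for one slice of old mail; this also
    -- removes their 2kb chunks from pg_largeobject.
    SELECT lo_unlink(body_oid)
      FROM emails
     WHERE received >= '2005-01-01' AND received < '2005-02-01';
    -- Then drop the application rows themselves.
    DELETE FROM emails
     WHERE received >= '2005-01-01' AND received < '2005-02-01';
    COMMIT;

After each slice or two, run a plain (non-FULL) vacuum:

    vacuumdb -av

As long as max_fsm_pages in postgresql.conf is large enough to remember all that freed space, new mail will be written into the old pages, so the files stop growing even though they never shrink.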
You'll also want to reindex periodically, especially on 7.4. And yes, upgrade. At a minimum you need to get to the latest 7.4 point release, which doesn't require anything special.
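For the reindex part, something along these lines should do. REINDEX takes an exclusive lock on the table, so schedule it for a quiet window; if memory serves, 7.4 is the first release that can reindex a system table such as pg_largeobject without standalone mode. The emails table is the same hypothetical one from the sketch above:

    -- Plain vacuum doesn't shrink bloated indexes on 7.4, so rebuild
    -- them after the mass delete.
    REINDEX TABLE pg_largeobject;
    REINDEX TABLE emails;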
--
Decibel!, aka Jim C. Nasby, Database Architect  decibel@xxxxxxxxxxx
Give your computer some brain candy! www.distributed.net Team #1828