On Thu, Feb 23, 2012 at 12:39 AM, Reuven M. Lerner <reuven@xxxxxxxxxxxx> wrote:
> Hi, everyone. I'm maintaining an application that exists as a "black box"
> in manufacturing plants. The system is based on Windows, .NET, and
> PostgreSQL 8.3. I'm a Unix kind of guy myself, but the application layer
> and system administration are being handled by other people; I'm just the
> PostgreSQL guy.
>
> Because of the nature of the application, we don't have direct control over
> what happens. And it turns out that at one installation, we're quickly
> running out of disk space. The database is already taking up about 200 GB
> of space, and is growing by 1 GB or so a day. Switching disks, either to a
> larger/faster traditional drive, or even to an SSD, is not an option. (And
> yes, I know that SSDs have their own risks, but I'm just throwing that out
> as one option.)
>
> Right now, the best solution to the space problem is to delete information
> associated with old records, where "old" is from at least 30 days ago. The
> old records are spread across a few tables, including many large objects.
> (The application was written by people who were new to PostgreSQL, and
> didn't realize that they could use BYTEA.) Basically, given a foreign key
> B.a_id that points to table A, I want to DELETE all rows in B where A's
> creation date is at least 30 days ago.
>
> Unfortunately, when we implemented this simple delete, it executed slower
> than molasses, taking about 9 hours to do its thing.

Is this 9 hours of run time for deleting one day's worth of data, or for
deleting the entire accumulation of cruft that filled up the hard drive in
the first place (which would be 170 days, if you have 200GB that accumulated
at 1GB per day and you only need 30 days)?

Cheers,

Jeff
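
For reference, a bare-bones sketch of the kind of delete you describe might
look like the following. The table and column names (a, b, b.a_id, a.id,
a.created_at) are assumptions standing in for your real schema, and this
ignores the large-object cleanup entirely:

    -- Remove child rows whose parent is older than 30 days.
    -- DELETE ... USING is valid in 8.3 and avoids a correlated subquery.
    DELETE FROM b
    USING a
    WHERE b.a_id = a.id
      AND a.created_at < now() - interval '30 days';

    -- Then remove the old parent rows themselves.
    DELETE FROM a
    WHERE a.created_at < now() - interval '30 days';

Whether that is fast or slow depends heavily on how much you are deleting in
one pass, which is why the question above matters.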