On Mon, Feb 27, 2012 at 2:13 PM, Samuel Gendler <sgendler@xxxxxxxxxxxxxxxx> wrote:
>
> On Mon, Feb 27, 2012 at 6:59 AM, Reuven M. Lerner <reuven@xxxxxxxxxxxx> wrote:
>>
>> So for now, we'll just try to DELETE faster than we INSERT, and combined
>> with autovacuum, I'm hoping that this crisis will be averted. That said,
>> the current state of affairs with these machines is pretty fragile, and I
>> think that we might want to head off such problems in the future, rather
>> than be surprised by them.
>>
>
> For the record, one very effective long-term solution for doing this, and
> for continuing to be able to do it no matter how many rows have
> accumulated, is to partition the data tables over time so that you can
> just drop older partitions. It does require code changes, since relying on
> a trigger on the parent table to distribute the inserts to the correct
> partition is much slower than simply modifying your code to insert/copy
> into the correct partition directly. But it is well worth doing if you are
> accumulating large volumes of data. You can even leave old partitions
> around if you don't need the disk space, since well-constructed queries
> will simply ignore their existence anyway, if you are only ever going back
> 30 days or less. Indexes are on individual partitions, so you needn't
> worry about indexes getting too large, either.

If they're only inserting ~1 or 2 GB a day, a trigger is likely plenty
fast. I've had stats DBs that grew by tens of gigabytes a day, and the
triggers were never a performance problem there.
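
For anyone who hasn't set this up before, here is a minimal sketch of the
inheritance-based partitioning Samuel describes, showing both the trigger
routing and the direct-insert alternative. Table and column names are made
up for illustration, and you'd normally script the monthly DDL:

  -- Parent table; the application queries this, never the children directly.
  CREATE TABLE measurements (
      recorded_at timestamptz NOT NULL,
      device_id   integer     NOT NULL,
      value       double precision
  );

  -- One child per month. The CHECK constraint lets the planner skip
  -- partitions that can't match (constraint_exclusion = partition,
  -- the default since 8.4).
  CREATE TABLE measurements_2012_02 (
      CHECK (recorded_at >= '2012-02-01' AND recorded_at < '2012-03-01')
  ) INHERITS (measurements);

  -- Indexes live on each child, so none of them grows without bound.
  CREATE INDEX measurements_2012_02_recorded_at_idx
      ON measurements_2012_02 (recorded_at);

  -- Option 1: trigger on the parent routes rows to the right child
  -- (the slower approach mentioned above, but simple to adopt).
  CREATE OR REPLACE FUNCTION measurements_insert_router() RETURNS trigger AS $$
  BEGIN
      IF NEW.recorded_at >= '2012-02-01'
         AND NEW.recorded_at < '2012-03-01' THEN
          INSERT INTO measurements_2012_02 VALUES (NEW.*);
      ELSE
          RAISE EXCEPTION 'no partition for timestamp %', NEW.recorded_at;
      END IF;
      RETURN NULL;  -- row already routed; suppress insert into the parent
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER measurements_insert
      BEFORE INSERT ON measurements
      FOR EACH ROW EXECUTE PROCEDURE measurements_insert_router();

  -- Option 2: the application inserts/copies into the child directly,
  -- skipping the trigger overhead entirely:
  --   INSERT INTO measurements_2012_02 VALUES ('2012-02-27 14:13', 42, 1.5);

  -- Retiring old data is then a near-instant metadata operation rather
  -- than a bulk DELETE plus vacuum:
  --   DROP TABLE measurements_2012_01;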