Brian:

One approach we use for large tables is to partition and then drop partitions as the data becomes obsolete. That way you never run into this problem. Our general rule is to never delete data from a table, because it is too slow. We have found this to be the preferred approach regardless of database platform. (A rough sketch of this setup on 8.3 follows below the quoted message.)

-Jerry

Jerry Champlin | Absolute Performance Inc. | Mobile: 303-588-2547

-----Original Message-----
From: pgsql-performance-owner@xxxxxxxxxxxxxx [mailto:pgsql-performance-owner@xxxxxxxxxxxxxx] On Behalf Of Brian Cox
Sent: Monday, February 02, 2009 11:18 AM
To: pgsql-performance@xxxxxxxxxxxxxx
Subject: Deleting millions of rows

I'm using 8.3.5. Table ts_defects has 48M rows. Through psql:

    delete from ts_defects;

Result: out of memory/Can't allocate size: 32

I then did 10 or so smaller deletes to get rid of the rows. Afterwards, inserts into or queries on this table performed significantly slower. I tried a vacuum analyze, but this didn't help. To fix this, I dumped and restored the database.

1) Why can't postgres delete all rows in a table if it has millions of rows?
2) Is there any other way to restore performance other than restoring the database?

Thanks,
Brian
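
For readers unfamiliar with the approach Jerry describes, here is a minimal sketch using the inheritance-based partitioning available in 8.3. The table and column names (ts_defects_parent, defect_date, the monthly children) are illustrative only and are not from Brian's actual schema:

    -- Parent table holds no data of its own; the children carry the rows.
    CREATE TABLE ts_defects_parent (
        defect_id   bigint,
        defect_date date,
        detail      text
    );

    -- One child table per month. The CHECK constraints let the planner
    -- (with constraint_exclusion = on) skip partitions that cannot match.
    CREATE TABLE ts_defects_2009_01 (
        CHECK (defect_date >= DATE '2009-01-01' AND defect_date < DATE '2009-02-01')
    ) INHERITS (ts_defects_parent);

    CREATE TABLE ts_defects_2009_02 (
        CHECK (defect_date >= DATE '2009-02-01' AND defect_date < DATE '2009-03-01')
    ) INHERITS (ts_defects_parent);

    -- Inserts must be routed to the right child by a trigger or by the
    -- application; that plumbing is omitted here for brevity.

    -- When January's data becomes obsolete, discard the whole partition
    -- instead of running a huge DELETE. This is near-instant and leaves
    -- no dead rows behind to vacuum.
    DROP TABLE ts_defects_2009_01;

Queries against ts_defects_parent continue to see the remaining children transparently, so dropping an old partition replaces both the slow bulk DELETE and the vacuum work that would otherwise follow it.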