> -----Original Message-----
> From: Brian Cox
> Subject: Deleting millions of rows
>
> I'm using 8.3.5. Table ts_defects has 48M rows. Through psql:
> delete from ts_defects;
> Result: out of memory/Can't allocate size: 32
> I then did 10 or so deletes to get rid of the rows. Afterwards,
> inserts into or queries on this table performed significantly slower.
> I tried a vacuum analyze, but this didn't help. To fix this, I dumped
> and restored the database.
>
> 1) why can't postgres delete all rows in a table if it has
> millions of rows?
> 2) is there any other way to restore performance other than
> restoring the database?
>
> Thanks,
> Brian

If you are deleting an entire table, then the TRUNCATE command is the way
to go. TRUNCATE is very fast and leaves no dead rows behind.

The problem with a normal DELETE is that the rows are not actually removed
from the file. Once the table is vacuumed, the dead space is marked as
available for reuse, but a plain VACUUM doesn't shrink the file on disk.
A VACUUM FULL or CLUSTER will actually reclaim the dead space, but either
one can take a while to run on a big table. (I've heard CLUSTER is
supposed to be faster than VACUUM FULL.)

Another way is to create a new table with the same definition as the old
table, select the rows you want to keep into the new table, drop the old
table, and then rename the new table to the old table's name.

Dave
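
For reference, the commands above look roughly like this in psql. ts_defects
is the table from Brian's post; the index name in the CLUSTER example is just
a placeholder, so substitute whichever index you want to order the table by:

    -- Empty the whole table and release its space immediately;
    -- unlike DELETE, no dead rows are left behind.
    TRUNCATE TABLE ts_defects;

    -- After a large DELETE, rewrite the table to reclaim dead space.
    -- Takes an exclusive lock and can run for a while on a big table.
    VACUUM FULL ts_defects;

    -- Or rewrite the table in index order, which also removes dead space.
    -- Assumes an index named ts_defects_pkey exists; use your own index name.
    CLUSTER ts_defects USING ts_defects_pkey;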
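
And a sketch of the copy-and-rename approach for keeping only part of the
table. The column and WHERE condition here are made up for illustration, and
note that indexes, constraints, and permissions on the old table have to be
recreated on the new one by hand:

    -- Copy just the rows worth keeping into a new table.
    CREATE TABLE ts_defects_new AS
      SELECT * FROM ts_defects
      WHERE defect_date > now() - interval '30 days';

    -- Swap the new table in under the old name.
    DROP TABLE ts_defects;
    ALTER TABLE ts_defects_new RENAME TO ts_defects;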