On Mon, Feb 2, 2009 at 1:17 PM, Brian Cox <brian.cox@xxxxxx> wrote:
> I'm using 8.3.5. Table ts_defects has 48M rows. Through psql: delete from
> ts_defects;
> Result: out of memory/Can't allocate size: 32
> I then did 10 or so deletes to get rid of the rows. Afterwards, inserts
> into or queries on this table performed significantly slower. I tried a
> vacuum analyze, but this didn't help. To fix this, I dumped and restored
> the database.
>
> 1) why can't postgres delete all rows in a table if it has millions of
> rows?
> 2) is there any other way to restore performance other than restoring the
> database?

Does the table have triggers on it? Does it have indexes? What is the
result of pg_relation_size() on that table? How much memory do you have in
your machine? What is work_mem set to?

Did you try VACUUM FULL instead of just plain VACUUM to recover
performance? You might also need to REINDEX.

Or you could TRUNCATE the table.

...Robert

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
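
[Editor's sketch: the diagnostics and recovery options suggested above can be run directly in psql. The table name is from the thread; everything else is a generic illustration, not the poster's actual setup.]

```sql
-- How big is the table on disk?
SELECT pg_size_pretty(pg_relation_size('ts_defects'));

-- Are there triggers or indexes on it? (An AFTER trigger, e.g. from a
-- foreign key, queues one pending event per deleted row in memory,
-- which is a common cause of DELETE running out of memory on 48M rows.)
SELECT tgname FROM pg_trigger WHERE tgrelid = 'ts_defects'::regclass;
SELECT indexname FROM pg_indexes WHERE tablename = 'ts_defects';

-- Current per-operation sort/hash memory setting
SHOW work_mem;

-- Recovery option 1: compact the table and rebuild its indexes
VACUUM FULL ts_defects;
REINDEX TABLE ts_defects;

-- Recovery option 2: if the goal is to empty the table, TRUNCATE
-- reclaims the space immediately and fires no per-row triggers
TRUNCATE TABLE ts_defects;
```

Note that plain VACUUM only marks dead rows reusable; after deleting all 48M rows, the table and index files keep their full size, which is why subsequent scans stayed slow until the dump/restore. VACUUM FULL (or TRUNCATE, when emptying the table is acceptable) actually returns the space.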