On Mon, Feb 2, 2009 at 11:17 AM, Brian Cox <brian.cox@xxxxxx> wrote:
> I'm using 8.3.5. Table ts_defects has 48M rows. Through psql: delete from
> ts_defects;
> Result: out of memory/Can't allocate size: 32
> I then did 10 or so deletes to get rid of the rows. Afterwards, inserts into
> or queries on this table performed significantly slower. I tried a vacuum
> analyze, but this didn't help.

There are two different problems happening here. One is the failed delete,
the other is the normal bloat left behind when a lot of rows are deleted.

When deleting every row in a table you're much better off just truncating
it. Foreign keys can get in the way, though, so you might need truncate ...
cascade. If you're not sure you really want to do it, wrap the truncate in
a transaction: begin, truncate, look around, then commit if you like what
you see or rollback if you don't. (There's a quick sketch at the end of
this message.)

If you choose to use a delete, then foreign keys can slow things down quite
a bit, and if you've got bad stats it's possible for the planner to choose
a plan that runs out of memory. Was this db recently analyzed?

If the delete is what you need for some reason, a regular vacuum won't fix
the bloat: it only marks the space used by dead tuples as reusable, it
doesn't give that space back. A cluster command, or a vacuum full followed
by a reindex, are the two best ways to get the space back.

> To fix this, I dumped and restored the database.

That works too. Since the table was empty, you could have dropped and
recreated it, but if you had foreign keys you'd have had to recreate them
too.

> 1) why can't postgres delete all rows in a table if it has millions of rows?

It works fine for me, often into the billions of rows. Your test case
seems out of the ordinary. Can you post all the non-default values in your
postgresql.conf / alter database set ... settings?
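
If it's easier than picking through postgresql.conf by hand, something
roughly like this should list everything that isn't sitting at its default
(treat it as a sketch and trim the output as needed):

  select name, setting, source
    from pg_settings
   where source <> 'default'
   order by name;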
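
And the truncate-in-a-transaction idea from above, as a minimal sketch. The
cascade is only needed if other tables have foreign keys pointing at
ts_defects, and the rollback option works because truncate is transactional
in postgres:

  begin;
  truncate table ts_defects cascade;   -- cascade only if other tables reference ts_defects
  -- run a few counts / sanity-check queries here
  commit;                              -- or rollback; if it's not what you wanted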
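
To see whether the table's stats are stale (which is one way a big delete
can end up with a bad plan), something like this works on 8.3; the table
name is just the one from your post, and nulls mean it hasn't been analyzed
since the counters were last reset:

  select relname, last_analyze, last_autoanalyze
    from pg_stat_user_tables
   where relname = 'ts_defects';

  analyze verbose ts_defects;          -- refresh the stats if those look old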
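
And if you do end up bloated after a big delete, either of these will get
the space back. Both take an exclusive lock on the table while they run,
and ts_defects_pkey is just a guess at an index name; use whichever index
the table actually has:

  cluster ts_defects_pkey on ts_defects;   -- rewrites the table in index order

  -- or:
  vacuum full ts_defects;                  -- compacts the table in place
  reindex table ts_defects;                -- rebuilds the indexes vacuum full tends to bloat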