Re: Deleting millions of rows

Tom Lane [tgl@xxxxxxxxxxxxx] wrote:
> It's the pending trigger list.  He's got two trigger events per row,
> which at 40 bytes apiece would approach 4GB of memory.  Apparently
> it's a 32-bit build of Postgres, so he's running out of process address
> space.
Yes, this is a 32-bit Postgres running on 32-bit Linux. I assume the two trigger events per row come from the two "on delete cascade" FKs. Thanks for explaining this bit of a mystery.
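
For reference, a simplified sketch of the kind of schema involved (table and column names here are illustrative, not the real ones):

    CREATE TABLE parent (
        id integer PRIMARY KEY
    );

    -- Each ON DELETE CASCADE FK queues one AFTER-trigger event per deleted
    -- parent row, so with two referencing tables a bulk DELETE queues two
    -- events per row in backend memory.
    CREATE TABLE child_a (
        parent_id integer NOT NULL
            REFERENCES parent (id) ON DELETE CASCADE
    );

    CREATE TABLE child_b (
        parent_id integer NOT NULL
            REFERENCES parent (id) ON DELETE CASCADE
    );

    -- DELETE FROM parent;  -- tens of millions of rows * 2 events * ~40 bytes
    --                      -- is what blew past the 32-bit address space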

> TRUNCATE is the best solution if you want to get rid of the whole table
> contents.  If you're deleting very many but not all rows, people tend
> to drop the FK constraints and re-establish them afterwards.  Retail
> checking is just too slow.
Thanks also to you (and several others) for reminding me of TRUNCATE.
This will definitely work for what I was trying to do: reset this table for more testing.
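
Concretely, the test reset plus the drop-and-recreate-FK variant for partial deletes would look something like the following sketch (constraint, column, and table names are illustrative; the real ones depend on the schema):

    -- Full reset for testing; CASCADE also truncates the referencing tables.
    TRUNCATE TABLE parent CASCADE;

    -- For a large-but-partial delete: drop the FKs, delete, then re-add them.
    BEGIN;
    ALTER TABLE child_a DROP CONSTRAINT child_a_parent_id_fkey;
    ALTER TABLE child_b DROP CONSTRAINT child_b_parent_id_fkey;

    DELETE FROM child_a WHERE parent_id IN
        (SELECT id FROM parent WHERE created_at < now() - interval '90 days');
    DELETE FROM child_b WHERE parent_id IN
        (SELECT id FROM parent WHERE created_at < now() - interval '90 days');
    DELETE FROM parent WHERE created_at < now() - interval '90 days';

    ALTER TABLE child_a ADD CONSTRAINT child_a_parent_id_fkey
        FOREIGN KEY (parent_id) REFERENCES parent (id) ON DELETE CASCADE;
    ALTER TABLE child_b ADD CONSTRAINT child_b_parent_id_fkey
        FOREIGN KEY (parent_id) REFERENCES parent (id) ON DELETE CASCADE;
    COMMIT;

Re-adding each FK still scans the child table to validate it, but that is a single check rather than millions of queued per-row trigger events.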

In production, the table I ran DELETE FROM on grows constantly, and old data is removed in batches periodically (up to a few hundred thousand rows at a time, out of several million). I'm assuming that autovacuum/autoanalyze will let Postgres maintain reasonable performance for INSERTs and SELECTs on it; do you think that's a reasonable assumption?
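
For reference, the kind of sanity check I have in mind after each batch delete (table name illustrative; the per-table settings below assume a Postgres version that supports autovacuum storage parameters):

    -- See how many dead tuples are lingering and when autovacuum last ran:
    SELECT relname, n_live_tup, n_dead_tup, last_autovacuum, last_autoanalyze
    FROM pg_stat_user_tables
    WHERE relname = 'parent';

    -- If dead rows pile up between runs, tighten the per-table thresholds
    -- so autovacuum triggers sooner on this large table:
    ALTER TABLE parent SET (
        autovacuum_vacuum_scale_factor = 0.02,
        autovacuum_analyze_scale_factor = 0.02
    );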

Thanks,
Brian


--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
