Re: Deleting millions of rows

On Tue, Feb 3, 2009 at 4:17 PM, Tom Lane <tgl@xxxxxxxxxxxxx> wrote:
> Alvaro Herrera <alvherre@xxxxxxxxxxxxxxxxx> writes:
>>> Robert Haas wrote:
>>> Have you ever given any thought to whether it would be possible to
>>> implement referential integrity constraints with statement-level
>>> triggers instead of row-level triggers?
>
>> Well, one reason we haven't discussed this is that our per-statement
>> triggers are still too primitive -- we don't have access to the list of
>> acted-upon tuples.  As soon as we have that, we can start discussing this
>> optimization.
>
> I think the point is that at some number of tuples it's better to forget
> about per-row tests at all, and instead perform the same whole-table
> join that would be used to validate the FK from scratch.  The mechanism
> we lack is not one to pass the row list to a statement trigger, but one
> to smoothly segue from growing a list of per-row entries to dropping
> that list and queueing one instance of a statement trigger instead.

That's good if you're deleting most or all of the parent table, but
what if you're deleting 100,000 values from a 10,000,000 row table?
In that case maybe I'm better off inserting all of the deleted keys
into a side table and doing a merge or hash join between the side
table and the child table...
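
A hand-rolled sketch of that plan shape, assuming hypothetical tables
parent(id) and child(parent_id) plus a made-up predicate picking out the
~100,000 doomed keys:

    -- parent/child and the WHERE clause are illustrative only
    CREATE TEMP TABLE doomed_keys AS
        SELECT id FROM parent WHERE created < '2008-01-01';
    ANALYZE doomed_keys;   -- give the planner row counts for the side table

    -- children first so the FK is never violated; each DELETE is one
    -- merge or hash join rather than one index probe per deleted key
    DELETE FROM child c USING doomed_keys d WHERE c.parent_id = d.id;
    DELETE FROM parent p USING doomed_keys d WHERE p.id = d.id;

(The per-row RI triggers on the parent delete still fire today; they just
find nothing left to do.)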

...Robert


