
Re: Delete/update with limit

"Csaba Nagy" <nagy@xxxxxxxxxxxxxx> writes:

>> Unfortunately the stuff that makes a ctid=<value> nice doesn't seem to be
>> used when you're doing an in. It's possible that a function that does
>> something like
>>  for rec in select ctid from my_table limit 10 loop
>>   delete from my_table where ctid=rec.ctid;
>>  end loop
>> might do okay, but I haven't tried it.
>
> OK, I think this will work. It would be nice though to have the 'ctid
> in' trick work just as well as 'ctid = ' ...
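
Spelled out, the loop would look something like the untested sketch below
(the function name and batch size are just for illustration; it assumes a
table called my_table):

 create or replace function delete_batch() returns integer as $$
 declare
  rec record;
  n integer := 0;
 begin
  -- grab up to 10 ctids, then delete each row individually so the
  -- planner can use a fast tid scan for every delete
  for rec in select ctid from my_table limit 10 loop
   delete from my_table where ctid = rec.ctid;
   n := n + 1;
  end loop;
  return n;
 end;
 $$ language plpgsql;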

Unfortunately I don't think this will work. Multiple backends will happily
pick up the same ctids in their selects and then try to delete the same
records.

The second backend to reach a record will have to block on the first
backend's lock, destroying the parallelism you were hoping for. And once the
first backend commits, the second will find the record already deleted and
end up with fewer records in its work set than the limit you specified.
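
To see the blocking, imagine two sessions that both picked up the same ctid
(the value '(0,1)' is just an example):

 -- session 1:
 begin;
 delete from my_table where ctid = '(0,1)';

 -- session 2, meanwhile:
 begin;
 delete from my_table where ctid = '(0,1)';  -- blocks until session 1
                                             -- commits, then deletes 0 rows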

I think you can make it work reasonably well by having each worker first
update a field in the records it wants to process, to indicate it has
"grabbed" them, and commit that; then go back and process them; then go back
and delete them. But then you need some facility for finding, after a crash,
grabbed records which were never processed.
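
A minimal sketch of that grab-then-process scheme (the grabbed_by column,
the worker name, and the id primary key are all assumptions, and it ignores
the race where two workers grab overlapping rows at the same moment):

 -- step 1: mark a batch as ours, and commit
 begin;
 update my_table set grabbed_by = 'worker-1'
  where id in (select id from my_table where grabbed_by is null limit 10);
 commit;

 -- step 2: process the grabbed rows, then delete them
 begin;
 select * from my_table where grabbed_by = 'worker-1';
 delete from my_table where grabbed_by = 'worker-1';
 commit;

 -- crash recovery: hand back anything a dead worker grabbed
 -- but never got around to processing
 update my_table set grabbed_by = null where grabbed_by = 'worker-1';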

-- 
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com


