>----- Original Message ----
>From: Tom Lane <tgl@xxxxxxxxxxxxx>
>To: Csaba Nagy <nagy@xxxxxxxxxxxxxx>
>Cc: Postgres general mailing list <pgsql-general@xxxxxxxxxxxxxx>
>Sent: Friday, May 19, 2006 9:31:24 AM
>Subject: Re: [GENERAL] allow LIMIT in UPDATE and DELETE
>
>You can't possibly think that that holds true in general.
>
>I can tolerate nondeterminism in SELECT because it doesn't change the
>data. If you get it wrong you can always do it over. UPDATE/DELETE
>need to have higher standards though.
>
>            regards, tom lane

The usage Csaba is referring to seems to be pretty common practice in the
world of Oracle. If I need to purge 5-10 million rows from a
non-partitioned table on a regular basis (e.g. archiving), I use DELETE
with an appropriate WHERE clause (typically something like "less than some
sequence number or date") and tack a "rownum < X" (where X is some fairly
large constant) onto the end so that the delete is done in chunks. I
commit immediately after each chunk and loop until sql%rowcount < X tells
me I'm finished.

Now the question... why would you do that instead of doing everything in
one big transaction on Oracle? Performance is one reason: Oracle's
performance (at least with 8/8i) seems to tank very quickly as you
increase the number of records deleted in a single transaction. The other
(at least with my understanding of Oracle internals) is that smaller
transactions use less rollback segment space, which reduces the likelihood
of your transaction getting killed because Oracle ran out of rollback
space on a heavily used database.

Since Postgresql's MVCC system doesn't require rollback segments, I guess
the performance question is the more important of the two. If "delete
from X where Y limit Z" could perform better in Postgresql than "delete
from X where Y in (some subselect limit Z)", I'd think Csaba's suggestion
has some merit.

Regards,

Shelby Cain
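
P.S. In case a concrete illustration helps, here is a rough sketch of the
kind of chunked purge loop I mean, in PL/SQL. The table name (my_log), the
date predicate, and the 10000-row chunk size are just placeholders, not
anything from the discussion above:

    declare
      chunk constant pls_integer := 10000;  -- rows deleted per transaction
      n pls_integer;
    begin
      loop
        -- delete one chunk of old rows; rownum caps the chunk size
        delete from my_log
         where created_at < add_months(sysdate, -12)
           and rownum <= chunk;
        n := sql%rowcount;
        commit;               -- keep each transaction (and its rollback usage) small
        exit when n < chunk;  -- a partial chunk means nothing is left to purge
      end loop;
    end;
    /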