On Mon, 2007-07-23 at 17:56 +0200, Csaba Nagy wrote:
> Now, I don't hold out much hope of convincing anybody that a LIMIT on
> the DELETE/UPDATE commands has valid usage scenarios, but can anybody
> help me find a good way to process such a buffer table in chunks, where
> insert speed is the highest priority (hence no indexes and a minimum of
> fields), batch processing still works well at large table sizes without
> impacting the inserts at all, and each batch finishes quickly to avoid
> long-running transactions? Because I can't really think of one... other
> than our scheme of DELETE with LIMIT + trigger + private temp table.

Use partitioning: don't delete, just drop the partition after a while.
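A minimal sketch of that with inheritance-based partitioning (table and
column names here are just illustrative):

    -- Parent table: the hot insert path stays index-free and
    -- trigger-free, because the application inserts directly into
    -- the current child table.
    CREATE TABLE buffer (
        payload    text,
        created_at timestamptz NOT NULL DEFAULT now()
    );

    -- One child per day, with a CHECK constraint so constraint
    -- exclusion can skip it in queries against the parent.
    CREATE TABLE buffer_2007_07_23 (
        CHECK (created_at >= '2007-07-23'
           AND created_at <  '2007-07-24')
    ) INHERITS (buffer);

    -- The batch job reads a finished child in full, then retires it
    -- with a cheap catalog operation instead of a row-by-row DELETE:
    DROP TABLE buffer_2007_07_23;

Something (a cron job, say) just needs to create the next day's child
ahead of time and point the inserts at it.

-- 
  Simon Riggs
  EnterpriseDB   http://www.enterprisedb.com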