On Sun, Sep 16, 2007 at 01:46:44PM +0300, Panagiwths Pediadiths wrote:
> Many transactions doing this in parallel end up inserting the value many
> times. Could I be missing something regarding how to set up the isolation
> level?
> Thanks!

No, the only easy way you can guarantee you won't insert duplicates is
with a unique index. The reason is that even under serialisable mode your
algorithm can produce duplicates, because Postgres doesn't do predicate
locking. The reason unique indexes can do it is that they have special
locking requirements which, when met, guarantee the result.

If for some reason you can't use a unique index (can't think of one, but
perhaps), then you need to implement this locking yourself. The easiest
way is to have one lock and take it before running your procedure (a
sketch of both approaches follows below). For more concurrency you can
partition the locks (unique indexes can be very efficient about this
because they can see rows that you can't; they go outside the normal
visibility checks).

Have a nice day,
--
Martijn van Oosterhout <kleptog@xxxxxxxxx>   http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to litigate.
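
A minimal sketch of the two approaches described above. The table name
"funky", the column "data", the function names and the advisory lock id are
all assumptions made for illustration; they do not come from the original
thread, and the snippet is only an outline, not a drop-in solution.

    -- Hypothetical table with a unique constraint on the value column.
    CREATE TABLE funky (
        data text,
        CONSTRAINT funky_data_key UNIQUE (data)
    );

    -- Approach 1: let the unique index do the work and catch the error.
    CREATE OR REPLACE FUNCTION insert_if_absent(p_data text) RETURNS void AS $$
    BEGIN
        BEGIN
            INSERT INTO funky (data) VALUES (p_data);
        EXCEPTION WHEN unique_violation THEN
            -- another transaction inserted the value first; nothing to do
            NULL;
        END;
    END;
    $$ LANGUAGE plpgsql;

    -- Approach 2: serialise callers yourself with one advisory lock taken
    -- before the check-then-insert (hash the value to several lock ids for
    -- more concurrency). Error handling around the unlock is omitted here;
    -- a session-level advisory lock stays held if an error aborts the call.
    CREATE OR REPLACE FUNCTION insert_if_absent_locked(p_data text) RETURNS void AS $$
    BEGIN
        PERFORM pg_advisory_lock(12345);  -- arbitrary lock id, an assumption
        IF NOT EXISTS (SELECT 1 FROM funky WHERE data = p_data) THEN
            INSERT INTO funky (data) VALUES (p_data);
        END IF;
        PERFORM pg_advisory_unlock(12345);
    END;
    $$ LANGUAGE plpgsql;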