"Raji Sridar (raji)" <raji@xxxxxxxxx> wrote:
> We use a typical counter within a transaction to generate the order
> sequence number and update the next sequence number. This is a simple
> next counter -- nothing fancy about it. When multiple clients are
> concurrently accessing and updating this table under extremely heavy
> load (stress testing), we find that the same order number is being
> generated for multiple clients. Could this be a bug? Is there a
> workaround? Please let me know.

As others have said: using a sequence/serial is best, as long as you can
live with gaps in the generated numbers. (Note that in actual practice,
the number of gaps is usually very small.)

Without seeing the code, here's my guess as to what's wrong: you take
out a write lock on the table, acquire the next number, release the
lock, and _then_ insert the new row. That leaves a race window between
number generation and insertion in which another client can read the
same counter value, which would allow duplicates. Am I right? Did I
guess it?

If so, you need to take out the lock on the table and hold that lock
until you've inserted the new row.

If none of these answers helps, you're going to have to show us your
code, or at least a pared-down version that exhibits the problem.

[I'm stripping off the performance list, as this doesn't seem like a
performance question.]

--
Bill Moran
http://www.potentialtech.com

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
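To make the fix concrete, here is a minimal sketch of the "hold the lock until the row is inserted" pattern, using Python's stdlib sqlite3 as a stand-in for PostgreSQL (all table and function names here are made up for the demo; in Postgres itself you'd use SELECT ... FOR UPDATE on the counter row, or better, a SEQUENCE). The essential point is that the write lock acquired at the start of the transaction is still held when the new order row is inserted, so no other client can read the counter in between.

```python
# Demo: concurrent clients taking order numbers from a counter table.
# sqlite3's BEGIN IMMEDIATE takes the write lock up front, playing the
# role of LOCK TABLE / SELECT ... FOR UPDATE in PostgreSQL.
import sqlite3
import threading

DB = "orders_demo.db"  # throwaway demo database file

def setup():
    con = sqlite3.connect(DB)
    con.execute("DROP TABLE IF EXISTS counter")
    con.execute("DROP TABLE IF EXISTS orders")
    con.execute("CREATE TABLE counter (next_num INTEGER)")
    con.execute("INSERT INTO counter VALUES (1)")
    con.execute("CREATE TABLE orders (order_num INTEGER)")
    con.commit()
    con.close()

def place_order():
    # isolation_level=None: we manage the transaction explicitly;
    # timeout=30 makes BEGIN IMMEDIATE wait for the lock instead of failing.
    con = sqlite3.connect(DB, timeout=30, isolation_level=None)
    try:
        con.execute("BEGIN IMMEDIATE")  # write lock taken here...
        n = con.execute("SELECT next_num FROM counter").fetchone()[0]
        con.execute("UPDATE counter SET next_num = next_num + 1")
        # The insert happens while the lock is still held. Releasing the
        # lock before this statement is exactly the race guessed at above.
        con.execute("INSERT INTO orders VALUES (?)", (n,))
        con.execute("COMMIT")  # ...and released only here
    finally:
        con.close()

def run_demo(clients=20):
    setup()
    threads = [threading.Thread(target=place_order) for _ in range(clients)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    con = sqlite3.connect(DB)
    nums = [r[0] for r in con.execute("SELECT order_num FROM orders")]
    con.close()
    return nums
```

With the lock held across the insert, every client gets a distinct number; move the COMMIT up between the UPDATE and the INSERT and duplicates become possible under load.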