Chris Angelico wrote:

> Correct me if I'm wrong, but wouldn't:
>
> update some_table set last_used=last_used+1 returning last_used
>
> simply block until the other transaction is committed/rolled back?

Only at the READ COMMITTED transaction isolation level.

> That avoids all the issues of serialization AND retrying
> transactions.

If you can tolerate the occasional oddities of READ COMMITTED
transactions, it makes this quite simple, yes.

> Or is the delay itself a problem?

The delay isn't a problem per se; it's hard to get around some form
of blocking if you want transactional integrity and gap-free
assignment of numbers. If you can do this sort of assignment near
the front of the transaction in REPEATABLE READ or SERIALIZABLE
transactions, it doesn't cost that much more than it does in READ
COMMITTED.

And the manifestations of weirdness in READ COMMITTED can be
daunting. The simplest example looks pretty contrived, but simple
examples usually do.

Create and populate a simple table:

    create table rc (id int primary key);
    insert into rc select generate_series(1, 5);

Now, if no records are inserted or deleted by another connection,
how many rows will be deleted by this statement?

    delete from rc where id = (select min(id) from rc);

It's a trick question; the answer depends on a race condition.
Before you run that delete, run this on a separate connection:

    begin;
    update rc set id = id - 1;

Run your delete on the first connection and it will block. Commit
the transaction on the second connection, then go back to the first
connection, look at the result of the delete, and select from the
table.

Rather than making sure that every programmer writing code against
your database knows exactly where all the problems are and
consistently codes around them, it can be a lot easier to use
serializable transactions.

-Kevin

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
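To spell out the race described above (assuming stock PostgreSQL at
the default READ COMMITTED level): the uncorrelated subquery computes
min(id) = 1 once, from the delete's snapshot; when the blocking
update commits, the row that had id = 1 is rechecked against its new
value (0), no longer matches, and no other row in the snapshot ever
matched, so nothing is deleted. A sketch of the two sessions:

    -- Session 2 (run first):
    begin;
    update rc set id = id - 1;

    -- Session 1 (blocks on the row that had id = 1):
    delete from rc where id = (select min(id) from rc);

    -- Session 2:
    commit;

    -- Session 1 then reports "DELETE 0", and:
    select id from rc order by id;
    -- shows 0, 1, 2, 3, 4 -- five rows, none gone, even though
    -- min(id) obviously existed the whole time

So a statement that "deletes the minimum row" can quietly delete
nothing at all, depending on timing.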
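For contrast, the serializable approach mentioned at the end looks
something like the sketch below. The one cost is that the client
must be prepared to retry: PostgreSQL reports a conflict as SQLSTATE
40001 (serialization_failure), and the application rolls back and
reruns the whole transaction from the top.

    begin isolation level serializable;
    delete from rc where id = (select min(id) from rc);
    commit;
    -- If any statement (including the commit) fails with
    -- SQLSTATE 40001, roll back and retry the transaction.

With that retry loop in place, the delete behaves as if the
transactions had run one at a time, and nobody has to reason about
the READ COMMITTED recheck semantics shown above.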