Scott Marlowe wrote:
> Sure, but you have to trap that all the time. The solution using a
> cycling sequence keeps you from ever seeing that (unless you managed
> to check out all 9,999 other values while still getting the current
> one). No locking needed, dozens of updaters running concurrently, and
> no need to track update errors.

Yep, that does sound like it'd be nicer, at least if locks are becoming free at a reasonable rate (i.e. you don't have to step through most of the table to find a free one). I was working on the probably mistaken assumption that the OP wanted the "next" / "first" available slot, not just any free slot.

If very few locks are free at any given time, I have the feeling the sequence approach could spend a lot of time just scanning through the table looking for free entries. Then again, an aggregate subquery is far from free either, and it's a whole lot nicer to repeat one statement until it succeeds than to retry the whole transaction whenever it conflicts with another (which will happen often if there's really high demand for locks). In fact, every transaction trying to grab the lowest free lock is practically a recipe for serialization failures, which makes it even less attractive. With only two concurrent connections it'd work OK if one used min() and the other used max(), but add another couple and you're in trouble.

The sequence-based approach sounds a fair bit better.
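To make the comparison concrete, here's a minimal sketch of the cycling-sequence approach (bw_pool is the table named later in the thread; the slot and in_use columns and the sequence name are my own invention):

    CREATE SEQUENCE bw_pool_seq MINVALUE 0 MAXVALUE 9999 CYCLE;

    -- Each updater simply repeats this until it reports "UPDATE 1".
    -- No explicit locking, and no serialization errors to trap:
    UPDATE bw_pool
       SET in_use = true
     WHERE slot = nextval('bw_pool_seq')
       AND NOT in_use;

Compare the lowest-free-slot version, where concurrent updaters tend to compute the same min() and collide (under SERIALIZABLE the loser gets a serialization failure; under READ COMMITTED it just updates zero rows and has to retry):

    UPDATE bw_pool
       SET in_use = true
     WHERE slot = (SELECT min(slot) FROM bw_pool WHERE NOT in_use);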
Serial access to the firewall is what I'm now emulating by issuing "LOCK bw_pool" as the first statement of the transaction. AFAIK Postgres' serializable transactions give you decent parallelism at the price of expecting you to retry them on serialization errors, and thus cannot be relied upon for performing actions on a shared external resource (like manipulating the firewall), even after having LOCKed some table. It's a pity there's no way to release the lock as soon as it's no longer needed; it is only released implicitly at the end of the transaction.
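For reference, the pattern I mean is roughly this (same hypothetical slot and in_use columns as in the sketch above):

    BEGIN;
    LOCK TABLE bw_pool;  -- ACCESS EXCLUSIVE by default
    -- find a free slot, mark it used, then reconfigure the firewall
    UPDATE bw_pool
       SET in_use = true
     WHERE slot = (SELECT min(slot) FROM bw_pool WHERE NOT in_use);
    COMMIT;              -- only here is the table lock released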