On Thu, Apr 3, 2008 at 12:58 PM, Craig Ringer <craig@xxxxxxxxxxxxxxxxxxxxx> wrote:
> Scott Marlowe wrote:
>
> > Sure, but you have to trap that all the time.  The solution using a
> > cycling sequence keeps you from ever seeing that (unless you managed
> > to check out all 9,999 other values while still holding the current
> > one).  No locking needed, dozens of updaters running concurrently and
> > no need to track update errors.
>
> Yep, that does sound like it'd be nicer, at least if locks are becoming
> free at a reasonable rate (i.e. you don't have to step through most of
> the table to find a free lock). I was working on the probably mistaken
> assumption that the OP wanted the "next" / "first" available slot, not
> any free slot.
>
> If there are very few free locks at any given time, I have the feeling
> the sequence approach could spend a lot of time just scanning through
> the table looking for free entries. Then again, an aggregate subquery
> is far from free either, and it's a whole lot nicer to just repeat one
> statement until it succeeds rather than retrying the whole transaction
> if it conflicts with another (which will happen often if there's really
> high demand for locks).
>
> In fact, two transactions both trying to grab the lowest free lock is
> practically a recipe for serialization failures, making that approach
> even less attractive. With only two concurrent connections it'd work OK
> if one used min() and the other used max() ... but add another couple
> and you're in trouble.
>
> The sequence-based approach sounds a fair bit better.

Add prepared statements and you'd get pretty fast performance.

I'd hope it's a situation where most slots ARE free.  Note that my whole
argument rests on the assumption that gaps are acceptable, since a lower
number can become free again later.  If gaps somehow are a problem, then
there's a whole other set of approaches for providing the illusion of
gap-free ids, without all that cost.
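To make the cycling-sequence idea concrete, here is a minimal in-memory sketch (in Python rather than SQL, so it runs standalone). A counter cycles through slot ids 1..N, and each claim attempt is a single conditional "grab this slot only if it's free" check, retried until one succeeds. All names here (`claim_slot`, `release_slot`, the slot table) are illustrative, not from the thread; in PostgreSQL the cycle would be a `CREATE SEQUENCE ... CYCLE` and the conditional grab a single `UPDATE ... WHERE owner IS NULL`.

```python
import itertools

N_SLOTS = 10
# slot id -> owner (None means the slot is free)
slots = {i: None for i in range(1, N_SLOTS + 1)}
# stands in for a cycling sequence: 1, 2, ..., N, 1, 2, ...
seq = itertools.cycle(range(1, N_SLOTS + 1))

def claim_slot(owner, max_attempts=N_SLOTS):
    """Probe successive sequence values until a free slot is claimed.

    Returns the claimed slot id, or None if every probed slot was busy
    (the "checked out all 9,999 other values" case from the thread).
    """
    for _ in range(max_attempts):
        slot_id = next(seq)            # nextval() on the cycling sequence
        if slots[slot_id] is None:     # conditional grab; no min()/max() scan
            slots[slot_id] = owner
            return slot_id
    return None

def release_slot(slot_id):
    """Mark a slot free again; any later claimer can reuse it."""
    slots[slot_id] = None
```

The key property is that two concurrent claimers get *different* sequence values, so they never fight over the same "lowest free" row the way dueling min() subqueries do; the retry loop is one cheap statement per attempt rather than a whole restarted transaction.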