On 03/18/09 08:06, Simon Riggs wrote:
> On Wed, 2009-03-18 at 11:45 +0000, Matthew Wakeling wrote:
>> On Wed, 18 Mar 2009, Simon Riggs wrote:
>>> I agree with that, apart from the "granting no more" bit. The most
>>> useful behaviour is just to have two modes:
>>> * exclusive-lock held - all other x locks welcome, s locks queue
>>> * shared-lock held - all other s locks welcome, x locks queue
>>
>> The problem with making all other locks welcome is that there is a
>> possibility of starvation. Imagine a case where there is a constant
>> stream of shared locks - the exclusive locks may never actually get
>> hold of the lock under the "all other shared locks welcome" strategy.
>
> That's exactly what happens now.
>
>> Likewise with the reverse.
>
> I think it depends upon how frequently requests arrive. Commits cause
> X locks and we don't commit that often, so it's very unlikely that
> we'd see a constant stream of X locks and prevent shared lockers.
>
> Some comments from an earlier post on this topic (about 20 months ago):
>
> Since shared locks are currently queued behind exclusive requests when
> they cannot be immediately satisfied, it might be worth reconsidering
> the way LWLockRelease works also. When we wake up the queue we only
> wake the Shared requests that are adjacent to the head of the queue.
> Instead we could wake *all* waiting Shared requestors.
>
> e.g. with a lock queue like this:
>
> (HEAD)  S<-S<-X<-S<-X<-S<-X<-S
>
> Currently we would wake the 1st and 2nd waiters only. If we were to
> wake the 4th, 6th and 8th waiters also, then the queue would reduce
> in length very quickly, if we assume generally uniform service times.
> (If the head of the queue is X, then we wake only that one process,
> and I'm not proposing we change that.)
>
> That would mean queue jumping, right? Well, that's what already
> happens in other circumstances, so there cannot be anything
> intrinsically wrong with allowing it; the only question is: would it
> help?

I thought about that, except that without some restriction a huge queue
will cost a lot of time in manipulating the lock list on every wakeup.
Another option would be to maintain two lists, shared and exclusive, and
round-robin between them each time the list is accessed, so the list
manipulation stays cheap. But the best thing is to allow the flexibility
to change the algorithm, since some workloads may work fine with one and
others will NOT. The flexibility then allows tinkering for those already
reaching the limits. (Both wakeup policies are sketched in code at the
end of this message.)

-Jignesh

> We need not wake the whole queue; there may be some generally more
> beneficial heuristic. The reason for considering this is not to speed
> up Shared requests but to reduce the queue length, and thus the
> waiting time, for the eXclusive requestors. Each time a Shared request
> is dequeued, we effectively re-enable queue jumping, so a Shared
> request arriving during that window will actually jump ahead of Shared
> requests that were unlucky enough to arrive while an Exclusive lock
> was held. Worse than that, the new incoming Shared requests exacerbate
> the starvation, so the more non-adjacent groups of Shared lock
> requests there are in the queue, the worse the starvation of the
> exclusive requestors becomes. We are effectively randomly starving
> some shared locks as well as exclusive locks in the current scheme,
> based upon the state of the lock when they make their request. The
> situation is worst when the lock is heavily contended and the workload
> has a 50/50 mix of shared/exclusive requests, e.g. serializable
> transactions or transactions with lots of subtransactions.
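
To make the two wakeup policies above concrete, here is a minimal,
self-contained C sketch of both (not PostgreSQL's actual lwlock.c: the
WaitMode/Waiter types and the wake_adjacent_shared/wake_all_shared names
are invented for illustration, and the spinlocks and semaphores a real
LWLock implementation needs are omitted entirely):

#include <stdio.h>
#include <stdlib.h>

typedef enum { SHARED, EXCLUSIVE } WaitMode;

typedef struct Waiter {
    int            id;      /* 1-based queue position, for printing only */
    WaitMode       mode;
    struct Waiter *next;
} Waiter;

/* Current behaviour: wake a single X at the head, or else the run of
 * Shared waiters adjacent to the head of the queue. */
static void wake_adjacent_shared(Waiter **head)
{
    if (*head != NULL && (*head)->mode == EXCLUSIVE) {
        printf("wake X waiter %d\n", (*head)->id);
        Waiter *w = *head;
        *head = w->next;
        free(w);
        return;
    }
    while (*head != NULL && (*head)->mode == SHARED) {
        printf("wake S waiter %d\n", (*head)->id);
        Waiter *w = *head;
        *head = w->next;
        free(w);
    }
}

/* Proposed behaviour: wake *all* Shared waiters anywhere in the queue,
 * leaving the Exclusive waiters queued in their original order.  An X
 * at the head is still woken alone, as in the current scheme. */
static void wake_all_shared(Waiter **head)
{
    if (*head != NULL && (*head)->mode == EXCLUSIVE) {
        printf("wake X waiter %d\n", (*head)->id);
        Waiter *w = *head;
        *head = w->next;
        free(w);
        return;
    }
    for (Waiter **p = head; *p != NULL; ) {
        if ((*p)->mode == SHARED) {
            printf("wake S waiter %d\n", (*p)->id);
            Waiter *w = *p;
            *p = w->next;           /* unlink and wake the Shared waiter */
            free(w);
        } else {
            p = &(*p)->next;        /* Exclusive waiters keep their place */
        }
    }
}

int main(void)
{
    /* Build the example queue from the post: (HEAD) S S X S X S X S */
    WaitMode modes[] = { SHARED, SHARED, EXCLUSIVE, SHARED,
                         EXCLUSIVE, SHARED, EXCLUSIVE, SHARED };
    Waiter  *head = NULL, **tail = &head;

    for (int i = 0; i < 8; i++) {
        Waiter *w = malloc(sizeof(Waiter));
        w->id = i + 1;
        w->mode = modes[i];
        w->next = NULL;
        *tail = w;
        tail = &w->next;
    }

    /* Wakes waiters 1, 2, 4, 6 and 8; waiters 3, 5 and 7 (all X) stay
     * queued in order.  wake_adjacent_shared() would wake only 1 and 2. */
    wake_all_shared(&head);

    wake_adjacent_shared(&head);    /* now wakes the X at the head: 3 */

    while (head != NULL) {          /* discard the rest of the demo queue */
        Waiter *w = head;
        head = head->next;
        free(w);
    }
    return 0;
}

Note that wake_all_shared() still walks the whole list on every release,
which is exactly the list-manipulation cost objected to above; the
two-list round-robin idea would avoid that scan by keeping Shared and
Exclusive waiters segregated from the start.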