On Tue, Jun 01, 2010 at 07:24:14PM +0300, Gleb Natapov wrote:
> On Tue, Jun 01, 2010 at 05:53:09PM +0200, Andi Kleen wrote:
> > Gleb Natapov <gleb@xxxxxxxxxx> writes:
> > >
> > > The patch below allows patching the ticket spinlock code to behave
> > > similarly to the old unfair spinlock when a hypervisor is detected.
> > > After patching unlocked
> >
> > The question is what happens when you have a system with unfair
> > memory and you run the hypervisor on that. There it could be much worse.
> >
> How much worse could the performance hit be?

It depends on the workload. Overall it means that a contended lock can
have much higher latencies.

If you want to study some examples, see the locking problems the RT
people have with their heavyweight mutex-spinlocks.

But the main problem is that in the worst case you can see extremely
long stalls (up to a second has been observed), which then turn into a
correctness issue.

> > Your new code would starve again, right?
> >
> Yes, of course it may starve with an unfair spinlock. Since vcpus are
> not always running, there is a much smaller chance that a vcpu on a
> remote memory node will starve forever. Old kernels with unfair
> spinlocks are running fine in VMs on NUMA machines under various loads.

Try it on a NUMA system with unfair memory.

> > There's a reason the ticket spinlocks were added in the first place.
> >
> I understand that reason and do not propose to go back to the old
> spinlock on physical HW! But with virtualization the performance hit
> is unbearable.

Extreme unfairness can be unbearable too.

-Andi

-- 
ak@xxxxxxxxxxxxxxx -- Speaking for myself only.
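
[For context, a minimal sketch of the two lock types being debated above.
This is illustrative userspace code using C11 atomics, not the kernel's
actual asm implementation; all names here are hypothetical.]

#include <stdatomic.h>

/* Ticket spinlock: each waiter takes a ticket and spins until the
 * "now serving" counter reaches it. Waiters are served strictly FIFO,
 * so no CPU can starve. The downside in a VM: if the vcpu holding the
 * next ticket is descheduled by the hypervisor, every waiter queued
 * behind it stalls too. */
struct ticket_lock {
    atomic_uint next;    /* next ticket to hand out */
    atomic_uint serving; /* ticket currently allowed in */
};
/* initialize with: struct ticket_lock tl = { 0 }; */

static void ticket_spin_lock(struct ticket_lock *l)
{
    unsigned int me = atomic_fetch_add(&l->next, 1);
    while (atomic_load(&l->serving) != me)
        ; /* spin; a real implementation would pause/yield here */
}

static void ticket_spin_unlock(struct ticket_lock *l)
{
    atomic_fetch_add(&l->serving, 1);
}

/* Unfair (test-and-set) spinlock: whoever wins the atomic exchange
 * gets the lock, in no particular order. Any currently running vcpu
 * can grab it at once, which sidesteps the descheduled-ticket-holder
 * stall. The downside on NUMA: a waiter with slower access to the
 * lock's cache line (a remote memory node) can keep losing the race
 * indefinitely, which is the starvation Andi warns about. */
struct tas_lock {
    atomic_flag locked;
};
/* initialize with: struct tas_lock ul = { ATOMIC_FLAG_INIT }; */

static void tas_spin_lock(struct tas_lock *l)
{
    while (atomic_flag_test_and_set(&l->locked))
        ; /* spin until our exchange observes the lock as free */
}

static void tas_spin_unlock(struct tas_lock *l)
{
    atomic_flag_clear(&l->locked);
}

The trade-off in one line: the ticket lock turns hypervisor preemption
of one vcpu into a stall for the whole queue, while the test-and-set
lock bounds nobody's wait at all, so a remote-node waiter may never win.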