On Mon, 2016-06-06 at 17:59 +0200, Peter Zijlstra wrote:
> On Fri, Jun 03, 2016 at 02:33:47PM +1000, Benjamin Herrenschmidt wrote:
> >
> >  - For the above, can you show (or describe) where the qspinlock
> >    improves things compared to our current locks.
>
> So currently PPC has a fairly straight forward test-and-set spinlock
> IIRC. You have this because LPAR/virt muck and lock holder preemption
> issues etc..
>
> qspinlock is 1) a fair lock (like ticket locks) and 2) provides
> out-of-word spinning, reducing cacheline pressure.

Thanks Peter. I think I understand the theory, but I'd like to see it
translate into real numbers.

> Esp. on multi-socket x86 we saw the out-of-word spinning being a big win
> over our ticket locks.
>
> And fairness, brought to us by the ticket locks a long time ago,
> eliminated starvation issues we had, where a spinner local to the holder
> would 'always' win from a spinner further away. So under heavy enough
> local contention, the spinners on 'remote' CPUs would 'never' get to own
> the lock.

I think our HW has tweaks to avoid that from happening with the simple
locks in the underlying ll/sc implementation. In any case, what I'm
asking for is actual tests to verify it works as expected for us.

> pv-qspinlock tries to preserve the fairness while allowing limited lock
> stealing and explicitly managing which vcpus to wake.

Right.

> > While there's
> > theory and to some extent practice on x86, it would be nice to
> > validate the effects on POWER.
>
> Right; so that will have to be from benchmarks which I cannot help you
> with ;-)

Precisely :-) This is what I was asking for ;-)

Cheers,
Ben.
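
For readers unfamiliar with the terms used above, the contrast Peter
describes can be sketched in a few lines of C11. This is an
illustrative sketch only, not the kernel's qspinlock code: a
test-and-set lock has every waiter spinning on the one lock word, so
that cache line bounces between all contenders, while an MCS-style
queue lock (the idea underlying qspinlock) queues waiters in FIFO
order and has each of them spin on its own per-waiter node, so only
the handover touches a remote cache line.

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

/* Test-and-set lock, roughly what a simple ll/sc lock amounts to:
 * every waiter spins on the same word, so the cache line bounces. */
struct tas_lock {
        atomic_flag locked;
};

static void tas_spin_lock(struct tas_lock *l)
{
        while (atomic_flag_test_and_set_explicit(&l->locked,
                                                 memory_order_acquire))
                ;               /* all CPUs hammer the same cache line */
}

static void tas_spin_unlock(struct tas_lock *l)
{
        atomic_flag_clear_explicit(&l->locked, memory_order_release);
}

/* MCS-style queue lock: waiters queue in FIFO order (fairness) and
 * each spins on its own node ("out-of-word" spinning). */
struct mcs_node {
        _Atomic(struct mcs_node *) next;
        atomic_bool locked;
};

struct mcs_lock {
        _Atomic(struct mcs_node *) tail;
};

static void mcs_spin_lock(struct mcs_lock *l, struct mcs_node *me)
{
        struct mcs_node *prev;

        atomic_store_explicit(&me->next, NULL, memory_order_relaxed);
        atomic_store_explicit(&me->locked, true, memory_order_relaxed);

        /* Append ourselves to the tail of the waiter queue. */
        prev = atomic_exchange_explicit(&l->tail, me, memory_order_acq_rel);
        if (!prev)
                return;         /* queue was empty: lock acquired */

        atomic_store_explicit(&prev->next, me, memory_order_release);

        /* Spin on our own node, not on the shared lock word. */
        while (atomic_load_explicit(&me->locked, memory_order_acquire))
                ;
}

static void mcs_spin_unlock(struct mcs_lock *l, struct mcs_node *me)
{
        struct mcs_node *next =
                atomic_load_explicit(&me->next, memory_order_acquire);

        if (!next) {
                struct mcs_node *expected = me;

                /* No successor visible: try to empty the queue. */
                if (atomic_compare_exchange_strong_explicit(&l->tail,
                                &expected, NULL,
                                memory_order_acq_rel, memory_order_acquire))
                        return;

                /* A successor is between the xchg and setting ->next. */
                while (!(next = atomic_load_explicit(&me->next,
                                                     memory_order_acquire)))
                        ;
        }

        /* Hand over by writing only the successor's own node. */
        atomic_store_explicit(&next->locked, false, memory_order_release);
}

The real qspinlock packs the queue state into a single 32-bit lock
word, keeps its MCS nodes in per-CPU storage, and in the paravirt
variant adds the limited lock stealing and vcpu wakeup handling Peter
mentions; the cacheline behaviour it buys, though, is essentially what
the MCS half of the sketch shows.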