On Wed, Jan 19, 2011 at 10:53:52AM -0800, Jeremy Fitzhardinge wrote:
> > I didn't really read the patch, and I totally forgot everything from
> > when I looked at the Xen series, but does the Xen/KVM hypercall
> > interface for this include the vcpu to await the kick from?
> >
> > My guess is not, since the ticket locks used don't know who the owner
> > is, which is of course, sad. There are FIFO spinlock implementations
> > that can do this though.. although I think they all have a bigger memory
> > footprint.
>
> At least in the Xen code, a current owner isn't very useful, because we
> need the current owner to kick the *next* owner to life at release time,
> which we can't do without some structure recording which ticket belongs
> to which cpu.

If we had a yield-to [1] sort of interface _and_ information on which
vcpu owns a lock, then lock spinners could yield to the owning vcpu,
while the unlocking vcpu could yield to the next vcpu in line. The key
here is not to sleep while waiting for locks (as the current patch
series does, which can put other VMs at an advantage by giving them more
time than they are entitled to), and also to ensure that neither the
lock owner nor the next-in-line owner is unduly made to wait for cpu.

Is there a way we can dynamically expand the size of the lock, only upon
contention, to include additional information such as the owning vcpu?
Have the lock point to a per-cpu area upon contention where additional
details can be stored, perhaps?

1. https://lkml.org/lkml/2011/1/14/44

- vatsa