On Thu, Jun 03, 2010 at 09:50:51AM +0530, Srivatsa Vaddagiri wrote:
> On Wed, Jun 02, 2010 at 12:00:27PM +0300, Avi Kivity wrote:
> >
> > There are two separate problems: the more general problem is that
> > the hypervisor can put a vcpu to sleep while holding a lock, causing
> > other vcpus to spin until the end of their time slice. This can
> > only be addressed with hypervisor help.
>
> Fyi - I have an early patch ready to address this issue. Basically I am using
> host-kernel memory (mmap'ed into guest as io-memory via ivshmem driver) to hint
> host whenever guest is in spin-lock'ed section, which is read by host scheduler
> to defer preemption.

Looks like a nice, simple way to handle this for the kernel. However, I suspect
user space will hit the same issue sooner or later. I assume your approach is not
easily extensible to futexes?

> One pathological case where this may actually hurt is routines in guest like
> flush_tlb_others_ipi() which take a spinlock and then enter a while() loop
> waiting for other cpus to ack something. In this case, deferring preemption just
> because guest is in critical section actually hurts! Hopefully the upper bound
> for deferring preemption and the fact that such routines may not be frequently
> hit should help alleviate such situations.

So do you defer during the whole spinlock region or just during the spin?
I assume the first?

-Andi

--
ak@xxxxxxxxxxxxxxx -- Speaking for myself only.