Cc Paolo, kvm ml

2016-07-06 12:58 GMT+08:00 xinhui <xinhui.pan@xxxxxxxxxxxxxxxxxx>:
> Hi, wanpeng
>
> On 2016-07-05 17:57, Wanpeng Li wrote:
>>
>> Hi Xinhui,
>> 2016-06-28 22:43 GMT+08:00 Pan Xinhui <xinhui.pan@xxxxxxxxxxxxxxxxxx>:
>>>
>>> This is to fix some lock holder preemption issues. Some other lock
>>> implementations do a spin loop before acquiring the lock itself. The
>>> kernel currently has an interface, bool vcpu_is_preempted(int cpu),
>>> which takes a cpu as its parameter and returns true if that cpu is
>>> preempted. The kernel can then break out of its spin loops based on
>>> the return value of vcpu_is_preempted.
>>>
>>> As the kernel already uses this interface, let's support it.
>>>
>>> Only pSeries needs to support it, and powerNV is built into the same
>>> kernel image as pSeries, so we need to return false when running as
>>> powerNV. Another fact is that lppaca->yield_count stays zero on
>>> powerNV, so we can just skip the machine-type check.
>>
>>
>> Is lock holder vCPU preemption detected by pSeries hardware or by a
>> paravirt method?
>>
> There is a struct shared between the kernel and PowerVM/KVM, and we read
> the yield_count in this struct to detect whether a vcpu is running or
> not, so it's easy for ppc to implement such an interface. Note that
> yield_count is set by PowerVM/KVM, and only pSeries can run a guest for
> now. :)
>
> I also reviewed the x86-related code; it looks like we would need to add
> a hypercall to get such vcpu preemption info?

There is no such state recording the lock holder in x86 kvm; maybe we
don't need to depend on the PLE handler algorithm to guess it if we can
know the lock holder vCPU directly.

Regards,
Wanpeng Li
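
For context on how the interface is meant to be consumed, here is a minimal
sketch of a spin-wait path that bails out once the holder's vCPU is
preempted; my_lock, lock_is_held() and lock_owner_cpu() are made-up names
for illustration, not existing kernel APIs:

#include <linux/types.h>	/* bool */
#include <asm/processor.h>	/* cpu_relax() */

struct my_lock;					/* illustrative opaque lock type */
extern bool lock_is_held(struct my_lock *lock);	/* made-up helper */
extern int lock_owner_cpu(struct my_lock *lock);	/* made-up helper */
extern bool vcpu_is_preempted(int cpu);		/* interface discussed above */

/*
 * Spin only while the current holder's vCPU is actually running; once
 * vcpu_is_preempted() says the holder lost its host CPU, give up and
 * let the caller fall back to the slow (sleeping) path.
 */
static bool optimistic_spin(struct my_lock *lock)
{
	while (lock_is_held(lock)) {
		if (vcpu_is_preempted(lock_owner_cpu(lock)))
			return false;
		cpu_relax();
	}
	return true;	/* lock was released while we were spinning */
}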
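
And a minimal sketch of the pSeries-side check described in the thread,
assuming the usual lppaca layout and the convention that the hypervisor
leaves yield_count odd while the vCPU is preempted (that bit convention is
an assumption here, not something stated in this mail):

#include <asm/firmware.h>	/* firmware_has_feature() */
#include <asm/lppaca.h>		/* lppaca_of(), yield_count */

/* Sketch only, not the submitted patch. */
static inline bool vcpu_is_preempted(int cpu)
{
	/*
	 * On powerNV (bare metal) there is no hypervisor preemption and
	 * lppaca->yield_count stays zero, so report "not preempted".
	 */
	if (!firmware_has_feature(FW_FEATURE_SPLPAR))
		return false;

	/*
	 * Assumption: PowerVM/KVM bumps yield_count on each dispatch and
	 * preemption, so an odd value means "currently preempted".
	 */
	return !!(be32_to_cpu(lppaca_of(cpu).yield_count) & 1);
}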