> What are the differences wrt retry 1?

I'm using git format-patch as you requested.

> > This feature creates a new field in the VMCB called Pause
> > Filter Count. If Pause Filter Count is greater than 0 and
> > intercepting PAUSEs is enabled, the processor will increment
> > an internal counter when a PAUSE instruction occurs instead
> > of intercepting. When the internal counter reaches the
> > Pause Filter Count value, a PAUSE intercept will occur.
> >
> > This feature can be used to detect contended spinlocks,
> > especially when the lock holding VCPU is not scheduled.
> > Rescheduling another VCPU prevents the VCPU seeking the
> > lock from wasting its quantum by spinning idly.
> >
> > Experimental results show that most spinlocks are held
> > for less than 1000 PAUSE cycles or more than a few
> > thousand. Default the Pause Filter Counter to 3000 to
> > detect the contended spinlocks.
>
> 3000.

Thanks, I keep missing that.

> > On a 24 core system running 4 guests each with 16 VCPUs,
> > this patch improved overall performance of each guest's
> > 32 job kernbench by approximately 1%. Further performance
> > improvement may be possible with a more sophisticated
> > yield algorithm.
>
> Like I mentioned earlier, I don't think schedule() does
> anything on CFS.
>
> Try sched_yield(), but set /proc/sys/kernel/sched_compat_yield.

Will do.

> > +static int pause_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run)
> > +{
> > +	/* Simple yield */
> > +	vcpu_put(&svm->vcpu);
> > +	schedule();
> > +	vcpu_load(&svm->vcpu);
> > +	return 1;
> > +}
> > +
>
> You don't need to vcpu_put() and vcpu_load(). The scheduler
> will call them for you if/when it switches tasks.

I was waiting for feedback from Ingo on that issue, but I'll
try sched_yield() instead.

-Mark Langsdorf
Operating System Research Center
AMD
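
[Editor's note: for readers following the thread, a minimal sketch of
what the handler could look like after both review comments are
applied -- no explicit vcpu_put()/vcpu_load(), and a yield rather than
a bare schedule(). This is not the posted patch; it assumes the same
svm.c context as the quoted hunk. yield() from linux/sched.h is the
in-kernel counterpart of the sched_yield() syscall mentioned above,
and on CFS of that era its behaviour is governed by
/proc/sys/kernel/sched_compat_yield.]

	/*
	 * Sketch only: yield the physical CPU when the guest spins on
	 * a contended lock long enough to trigger a PAUSE intercept.
	 * The scheduler invokes the preempt notifiers for us if it
	 * actually switches tasks, so no vcpu_put()/vcpu_load() here.
	 */
	static int pause_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run)
	{
		yield();
		return 1;
	}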
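
[Editor's note: the changelog at the top describes the hardware
mechanism (a Pause Filter Count field in the VMCB plus the PAUSE
intercept). An illustrative sketch of how that could be programmed in
init_vmcb() is below; the feature bit, svm_has(), the
pause_filter_count field and INTERCEPT_PAUSE are assumed names for
illustration, not taken verbatim from the posted patch.]

	/*
	 * Sketch only: when the CPU supports pause filtering, program
	 * the default threshold from the changelog and turn on the
	 * PAUSE intercept so the counter is consulted.
	 */
	if (svm_has(SVM_FEATURE_PAUSE_FILTER)) {
		control->pause_filter_count = 3000;	/* default from the changelog */
		control->intercept |= (1ULL << INTERCEPT_PAUSE);
	}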