On Tue, Jun 29, 2010 at 04:28:35PM +0300, Avi Kivity wrote:
> On 06/29/2010 04:25 PM, Marcelo Tosatti wrote:
> >
> >>+		smp_call_function_single(vcpu->cpu,
> >>+				wbinvd_ipi, NULL, 1);
> >>+	}
> >>+
> >> 	kvm_x86_ops->vcpu_load(vcpu, cpu);
> >> 	if (unlikely(per_cpu(cpu_tsc_khz, cpu) == 0)) {
> >> 		unsigned long khz = cpufreq_quick_get(cpu);
> >>@@ -3650,6 +3670,21 @@ int emulate_invlpg(struct kvm_vcpu *vcpu, gva_t address)
> >> 	return X86EMUL_CONTINUE;
> >> }
> >>
> >>+int kvm_emulate_wbinvd(struct kvm_vcpu *vcpu)
> >>+{
> >>+	if (!need_emulate_wbinvd(vcpu))
> >>+		return X86EMUL_CONTINUE;
> >>+
> >>+	if (kvm_x86_ops->has_wbinvd_exit()) {
> >>+		smp_call_function_many(vcpu->arch.wbinvd_dirty_mask,
> >>+				wbinvd_ipi, NULL, 1);
> >
> >work_on_cpu() loop instead of smp_call_function_many(), to avoid executing
> >wbinvd with interrupts disabled.
>
> Why?  wbinvd is not interruptible.

Right. But still, smp_call_function_many() is going to busy-spin until
the target CPUs finish their work, while work_on_cpu() will schedule.
Also, the IPI request has to be handled immediately, bypassing the
scheduler.
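
For reference, a work_on_cpu() loop over the dirty mask could look
roughly like the sketch below. This is only illustrative, not part of
the posted patch; the wbinvd_on_cpu() helper name is made up here, and
the loop would replace the smp_call_function_many() call in
kvm_emulate_wbinvd():

	/* Runs from a workqueue on the target CPU, with interrupts enabled. */
	static long wbinvd_on_cpu(void *unused)
	{
		wbinvd();
		return 0;
	}

	...
		int cpu;

		/* Schedules onto each CPU in the mask instead of sending IPIs
		 * and busy-waiting for them to complete. */
		for_each_cpu(cpu, vcpu->arch.wbinvd_dirty_mask)
			work_on_cpu(cpu, wbinvd_on_cpu, NULL);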