On 06/28/2010 11:42 AM, Sheng Yang wrote:
> Some guest device driver may leverage the "Non-Snoop" I/O, and explicitly
> WBINVD or CLFLUSH to a RAM space. Since migration may occur before WBINVD
> or CLFLUSH, we need to maintain data consistency either by:
> 1: flushing cache (wbinvd) when the guest is scheduled out if there is no
>    wbinvd exit, or
> 2: execute wbinvd on all dirty physical CPUs when guest wbinvd exits.
>
> +int kvm_emulate_wbinvd(struct kvm_vcpu *vcpu)
> +{
> +	if (need_emulate_wbinvd(vcpu) && kvm_x86_ops->has_wbinvd_exit()) {
> +		smp_call_function_many(vcpu->arch.wbinvd_dirty_mask,
> +				wbinvd_ipi, NULL, 1);
> +		cpumask_clear(vcpu->arch.wbinvd_dirty_mask);
> +	} else
> +		wbinvd();
> +	return X86EMUL_CONTINUE;
> +}
> +EXPORT_SYMBOL_GPL(kvm_emulate_wbinvd);
Why check for has_wbinvd_exit()? If it's false, we don't get here anyway. If !need_emulate_wbinvd(), why call wbinvd()?
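Something like this, perhaps (untested sketch only, reusing the patch's
need_emulate_wbinvd()/wbinvd_ipi() helpers and dirty mask):

/*
 * Sketch: we only get here on a wbinvd exit, so has_wbinvd_exit() is
 * already implied, and with no non-coherent DMA mapped there is
 * nothing to flush at all.
 */
int kvm_emulate_wbinvd(struct kvm_vcpu *vcpu)
{
	if (need_emulate_wbinvd(vcpu)) {
		/* flush every physical CPU this vcpu has dirtied ... */
		smp_call_function_many(vcpu->arch.wbinvd_dirty_mask,
				       wbinvd_ipi, NULL, 1);
		cpumask_clear(vcpu->arch.wbinvd_dirty_mask);
		/* ... smp_call_function_many() skips the local CPU */
		wbinvd();
	}
	return X86EMUL_CONTINUE;
}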
> +
>  int emulate_clts(struct kvm_vcpu *vcpu)
>  {
>  	kvm_x86_ops->set_cr0(vcpu, kvm_read_cr0_bits(vcpu, ~X86_CR0_TS));
> @@ -5255,6 +5287,7 @@ void kvm_arch_vcpu_free(struct kvm_vcpu *vcpu)
>  		vcpu->arch.time_page = NULL;
>  	}
>
> +	free_cpumask_var(vcpu->arch.wbinvd_dirty_mask);
>  	fx_free(vcpu);
>  	kvm_x86_ops->vcpu_free(vcpu);
>  }
> @@ -5262,7 +5295,16 @@ void kvm_arch_vcpu_free(struct kvm_vcpu *vcpu)
>  struct kvm_vcpu *kvm_arch_vcpu_create(struct kvm *kvm, unsigned int id)
>  {
> -	return kvm_x86_ops->vcpu_create(kvm, id);
> +	struct kvm_vcpu *vcpu = kvm_x86_ops->vcpu_create(kvm, id);
> +
> +	if (IS_ERR(vcpu))
> +		return vcpu;
> +
> +	if (!zalloc_cpumask_var(&vcpu->arch.wbinvd_dirty_mask, GFP_KERNEL)) {
> +		kvm_x86_ops->vcpu_free(vcpu);
> +		return ERR_PTR(-ENOMEM);
> +	}
> +	return vcpu;
>  }
Better place for this allocation is kvm_arch_vcpu_init().
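Roughly (untested sketch; the surrounding code in kvm_arch_vcpu_init()
and kvm_arch_vcpu_uninit() is elided, and "fail" stands in for whatever
unwind label fits at that point):

int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu)
{
	int r;
	...
	/* allocate here instead of in kvm_arch_vcpu_create() */
	if (!zalloc_cpumask_var(&vcpu->arch.wbinvd_dirty_mask, GFP_KERNEL)) {
		r = -ENOMEM;
		goto fail;	/* hypothetical label: unwind earlier allocations */
	}
	...
	return 0;
fail:
	...
	return r;
}

void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu)
{
	/* matching free, so kvm_arch_vcpu_free() needn't touch the mask */
	free_cpumask_var(vcpu->arch.wbinvd_dirty_mask);
	...
}

--
error compiling committee.c: too many arguments to function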