Re: [PATCH v3] KVM: VMX: Execute WBINVD to keep data consistency with assigned devices

On 06/28/2010 06:36 AM, Sheng Yang wrote:
Some guest device drivers may leverage "Non-Snoop" I/O and explicitly execute WBINVD or CLFLUSH on a RAM region. Since migration may occur before the WBINVD or CLFLUSH, we need to maintain data consistency either by:

Don't we always force enable snooping? Or is that only for the processor, and you're worried about devices?

1. flushing the cache (WBINVD) when the guest is scheduled out, if there is no WBINVD exit; or
2. executing WBINVD on all dirty physical CPUs when the guest's WBINVD exits.




  	/* fields used by HYPER-V emulation */
  	u64 hv_vapic;
+
+	cpumask_t wbinvd_dirty_mask;
  };


Need alloc_cpumask_var()/free_cpumask_var() for very large hosts.
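A minimal sketch of that conversion, assuming the usual cpumask_var_t pattern (the init/uninit hook names follow the existing x86 code; the error handling is illustrative only):

```c
/* Sketch: embed a cpumask_var_t instead of a cpumask_t, so that with
 * CONFIG_CPUMASK_OFFSTACK the mask is heap-allocated on large hosts. */
struct kvm_vcpu_arch {
	/* ... */
	cpumask_var_t wbinvd_dirty_mask;	/* was: cpumask_t */
};

int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu)
{
	/* ... */
	if (!zalloc_cpumask_var(&vcpu->arch.wbinvd_dirty_mask, GFP_KERNEL))
		return -ENOMEM;
	/* ... */
}

void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu)
{
	free_cpumask_var(vcpu->arch.wbinvd_dirty_mask);
	/* ... */
}
```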


+static void wbinvd_ipi(void *garbage)
+{
+	wbinvd();
+}

Like Jan mentioned, this is quite heavy. What about a clflush() loop instead? That may take more time, but at least it's preemptible. Of course, it isn't preemptible in an IPI.

+
  void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
  {
+	/* The guest may execute WBINVD; keep its view of memory coherent */
+	if (vcpu->kvm->arch.iommu_domain) {
+		if (kvm_x86_ops->has_wbinvd_exit())
+			cpu_set(cpu, vcpu->arch.wbinvd_dirty_mask);
+		else if (vcpu->cpu != -1)
+			smp_call_function_single(vcpu->cpu,
+					wbinvd_ipi, NULL, 1);

Is there any point to doing this if !has_wbinvd_exit()? The vcpu might not have migrated in time, so the cache is flushed too late.

+	}
+
  	kvm_x86_ops->vcpu_load(vcpu, cpu);
  	if (unlikely(per_cpu(cpu_tsc_khz, cpu) == 0)) {
  		unsigned long khz = cpufreq_quick_get(cpu);
@@ -3650,6 +3664,21 @@ int emulate_invlpg(struct kvm_vcpu *vcpu, gva_t address)
  	return X86EMUL_CONTINUE;
  }

+int kvm_emulate_wbinvd(struct kvm_vcpu *vcpu)
+{
+	if (!vcpu->kvm->arch.iommu_domain)
+		return X86EMUL_CONTINUE;
+
+	if (kvm_x86_ops->has_wbinvd_exit()) {
+		smp_call_function_many(&vcpu->arch.wbinvd_dirty_mask,
+				wbinvd_ipi, NULL, 1);
+		cpus_clear(vcpu->arch.wbinvd_dirty_mask);

Race - a migration may set a new bit in wbinvd_dirty_mask after the smp_call_function_many() but before the cpus_clear(), and that bit would then be wiped without a flush.

However, it's probably benign, since we won't be entering the guest in that period.
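If the window ever needed closing, one illustrative ordering (purely a sketch; 'tmp' is a hypothetical preallocated scratch mask) is to clear the live mask before flushing a snapshot of it, so a concurrently set bit survives for the next flush:

```c
/* Snapshot-then-clear: a bit set by a migration after the clear is
 * kept in wbinvd_dirty_mask and handled on the next WBINVD exit. */
cpumask_copy(tmp, &vcpu->arch.wbinvd_dirty_mask);
cpumask_clear(&vcpu->arch.wbinvd_dirty_mask);
smp_call_function_many(tmp, wbinvd_ipi, NULL, 1);
```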

+	} else
+		wbinvd();
+	return X86EMUL_CONTINUE;
+}
+EXPORT_SYMBOL_GPL(kvm_emulate_wbinvd);
+
  int emulate_clts(struct kvm_vcpu *vcpu)
  {
  	kvm_x86_ops->set_cr0(vcpu, kvm_read_cr0_bits(vcpu, ~X86_CR0_TS));


--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.


