Re: [PATCH v3] KVM: VMX: Execute WBINVD to keep data consistency with assigned devices

On Monday 28 June 2010 11:56:08 Avi Kivity wrote:
> On 06/28/2010 06:36 AM, Sheng Yang wrote:
> > Some guest device driver may leverage the "Non-Snoop" I/O, and explicitly
> > WBINVD or CLFLUSH to a RAM space. Since migration may occur before WBINVD
> > or
> 
> > CLFLUSH, we need to maintain data consistency either by:
> Don't we always force enable snooping?  Or is that only for the
> processor, and you're worried about devices?
 
We only force snooping on VT-d engines that are capable of it (those with the
KVM_IOMMU_CACHE_COHERENCY flag, i.e. most recent server boards). And you're right,
with a snoop-capable VT-d engine we don't need to do any of this. I would address
it in the next version.
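Concretely, that could mean gating the whole path on the coherency flag rather
than on the mere presence of an IOMMU domain. A rough sketch of such a helper
(the name need_emulate_wbinvd and its placement are illustrative, not part of
this patch):

    /* Sketch: skip WBINVD emulation entirely when the VT-d engine
     * enforces snooping (KVM_IOMMU_CACHE_COHERENCY was set at map time). */
    static bool need_emulate_wbinvd(struct kvm_vcpu *vcpu)
    {
    	return vcpu->kvm->arch.iommu_domain &&
    	       !(vcpu->kvm->arch.iommu_flags & KVM_IOMMU_CACHE_COHERENCY);
    }

Both kvm_arch_vcpu_load() and kvm_emulate_wbinvd() could then call this
instead of testing iommu_domain directly.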

> > 1: flushing cache (wbinvd) when the guest is scheduled out if there is no
> > wbinvd exit, or
> > 2: execute wbinvd on all dirty physical CPUs when guest wbinvd exits.
> > 
> >   	/* fields used by HYPER-V emulation */
> >   	u64 hv_vapic;
> > 
> > +
> > +	cpumask_t wbinvd_dirty_mask;
> > 
> >   };
> 
> Need alloc_cpumask_var()/free_cpumask_var() for very large hosts.

OK.
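For reference, the dynamic-mask variant would look roughly like this (a
non-runnable kernel-code sketch; which init/uninit hooks to use is a guess on
my part):

    /* Sketch: with CONFIG_CPUMASK_OFFSTACK, cpumask_var_t avoids embedding
     * a full NR_CPUS-sized mask in every vcpu on very large hosts. */
    cpumask_var_t wbinvd_dirty_mask;	/* replaces the cpumask_t member */

    /* at vcpu init: */
    if (!zalloc_cpumask_var(&vcpu->arch.wbinvd_dirty_mask, GFP_KERNEL))
    	return -ENOMEM;

    /* at vcpu uninit: */
    free_cpumask_var(vcpu->arch.wbinvd_dirty_mask);

The cpu_set()/cpus_clear() calls below would correspondingly become
cpumask_set_cpu()/cpumask_clear() on the dereferenced mask.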
> 
> > +static void wbinvd_ipi(void *garbage)
> > +{
> > +	wbinvd();
> > +}
> 
> Like Jan mentioned, this is quite heavy.  What about a clflush() loop
> instead?  That may take more time, but at least it's preemptible.  Of
> course, it isn't preemptible in an IPI.
 
I think this kind of behavior happens rarely, and most recent processors should
have a WBINVD exit, which means it runs in an IPI anyway... So I think it's
acceptable here.

> > +
> > 
> >   void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
> >   {
> > 
> > +	/* Address WBINVD may be executed by guest */
> > +	if (vcpu->kvm->arch.iommu_domain) {
> > +		if (kvm_x86_ops->has_wbinvd_exit())
> > +			cpu_set(cpu, vcpu->arch.wbinvd_dirty_mask);
> > +		else if (vcpu->cpu != -1)
> > +			smp_call_function_single(vcpu->cpu,
> > +					wbinvd_ipi, NULL, 1);
> 
> Is there any point to doing this if !has_wbinvd_exit()?  The vcpu might
> not have migrated in time, so the cache is flushed too late.

For the !has_wbinvd_exit() case, the instruction is executed by the guest
directly and flushes the current processor immediately. The
smp_call_function_single() above ensures the last CPU the vcpu ran on is clean
as well, so we're fine.
 
> > +	}
> > +
> > 
> >   	kvm_x86_ops->vcpu_load(vcpu, cpu);
> >   	if (unlikely(per_cpu(cpu_tsc_khz, cpu) == 0)) {
> >   	
> >   		unsigned long khz = cpufreq_quick_get(cpu);
> > 
> > @@ -3650,6 +3664,21 @@ int emulate_invlpg(struct kvm_vcpu *vcpu, gva_t
> > address)
> > 
> >   	return X86EMUL_CONTINUE;
> >   
> >   }
> > 
> > +int kvm_emulate_wbinvd(struct kvm_vcpu *vcpu)
> > +{
> > +	if (!vcpu->kvm->arch.iommu_domain)
> > +		return X86EMUL_CONTINUE;
> > +
> > +	if (kvm_x86_ops->has_wbinvd_exit()) {
> > +		smp_call_function_many(&vcpu->arch.wbinvd_dirty_mask,
> > +				wbinvd_ipi, NULL, 1);
> > +		cpus_clear(vcpu->arch.wbinvd_dirty_mask);
> 
> Race - a migration may set a new bit in wbinvd_dirty_mask after the
> s_c_f_m().
> 
> However, it's probably benign, since we won't be entering the guest in
> that period.

Yes. :)

--
regards
Yang, Sheng

> 
> > +	} else
> > +		wbinvd();
> > +	return X86EMUL_CONTINUE;
> > +}
> > +EXPORT_SYMBOL_GPL(kvm_emulate_wbinvd);
> > +
> > 
> >   int emulate_clts(struct kvm_vcpu *vcpu)
> >   {
> >   
> >   	kvm_x86_ops->set_cr0(vcpu, kvm_read_cr0_bits(vcpu, ~X86_CR0_TS));
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

