Re: [PATCH v3] KVM: VMX: Execute WBINVD to keep data consistency with assigned devices

On Monday 28 June 2010 15:08:56 Avi Kivity wrote:
> On 06/28/2010 09:56 AM, Sheng Yang wrote:
> > On Monday 28 June 2010 14:56:38 Avi Kivity wrote:
> >> On 06/28/2010 09:42 AM, Sheng Yang wrote:
> >>>>> +static void wbinvd_ipi(void *garbage)
> >>>>> +{
> >>>>> +	wbinvd();
> >>>>> +}
> >>>> 
> >>>> Like Jan mentioned, this is quite heavy.  What about a clflush() loop
> >>>> instead?  That may take more time, but at least it's preemptible.  Of
> >>>> course, it isn't preemptible in an IPI.
> >>> 
> >>> I think this kind of behavior happens rarely, and most recent
> >>> processors should have a WBINVD exit, which means it's an IPI... So
> >>> I think it's maybe acceptable here.
> >> 
> >> Several milliseconds of non-responsiveness may not be acceptable for
> >> some applications.  So I think queue_work_on() and a clflush loop is
> >> better than an IPI and wbinvd.
> > 
> > OK... I'll update it in the next version.
> 
> Hm, the manual says (regarding clflush):
> > Invalidates the cache line that contains the linear address specified
> > with the source operand from all levels of the processor cache
> > hierarchy (data and instruction). The invalidation is broadcast
> > throughout the cache coherence domain. If, at any level of the cache
> > hierarchy, the line is inconsistent with memory (dirty) it is written
> > to memory before invalidation.
> 
> So I don't think you need to queue_work_on(), instead you can work in
> vcpu thread context.  But better check with someone that really knows.

Yeah, I've just checked the instruction as well. Since the invalidation is
broadcast, it seems we don't even need (and can't have) a dirty bitmap. But that
also means the overhead on a large machine would be significant.
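
For reference, a rough sketch of the kind of clflush loop we're talking about
(just an illustration with made-up names, not the actual patch; I think
something close to this already exists as clflush_cache_range() in arch/x86):

static void flush_range_clflush(void *vaddr, unsigned long size)
{
	unsigned long line = boot_cpu_data.x86_clflush_size;
	char *p = (char *)((unsigned long)vaddr & ~(line - 1));
	char *end = (char *)vaddr + size;

	mb();			/* order earlier stores before flushing */
	for (; p < end; p += line)
		clflush(p);	/* write back + invalidate one line,
				 * broadcast to the coherence domain */
	mb();			/* wait for the flushes to complete */
}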

And I've calculated how many times we would need to execute clflush to cover the
whole of guest memory. If I have it right, each clflush only covers one 64-byte
cache line, so for a typical 4G guest we would need to execute the instruction
4G / 64 = 64M times. The cycle cost of clflush can vary; even supposing it takes
only 10 cycles each (which sounds impossibly low, since it involves a broadcast
and a writeback, and doesn't count the cache refill time on all the other
processors afterwards), it would cost more than 0.2 seconds per flush on a
3.2GHz machine...
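
Spelling the arithmetic out (with the 64-byte line and 10-cycles-per-clflush
guesses from above):

	4G / 64 bytes per line = 2^32 / 2^6 = 2^26 = 64M clflush executions
	64M * 10 cycles        = 640M cycles
	640M cycles / 3.2GHz   = 0.2 seconds for a single full flush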

--
regards
Yang, Sheng

