Re: [PATCH] KVM: VMX: Execute WBINVD to keep data consistency with assigned devices

Sheng Yang wrote:
> Some guest device drivers may leverage "non-snooped" I/O, and explicitly
> execute WBINVD or CLFLUSH on a RAM region. Since migration may occur before
> the WBINVD or CLFLUSH, we need to maintain data consistency either by:
> 1: flushing the cache (wbinvd) when the guest is scheduled out, if there is
> no wbinvd exit, or
> 2: executing wbinvd on all dirty physical CPUs when the guest's wbinvd
> exits.
> 
> For wbinvd-VMExit-capable processors, we issue IPIs to all physical CPUs
> to do wbinvd, since we can't easily tell which physical CPUs are "dirty".

wbinvd is a heavy weapon in the hands of a guest. Even if it is limited
to pass-through scenarios, do we really need to bother all physical host
CPUs with potential multi-millisecond stalls? Think of VMs running only
on a subset of CPUs (e.g. to isolate latency sources). I would suggest
tracking the physical CPU usage of each VCPU between two wbinvd requests
and only sending the wbinvd IPI to that set.

Also, I think the code is still too VMX-focused. Only the trapping should
be vendor-specific; the rest can be generic.

Jan

-- 
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux
--

