Re: [PATCH 0/3] KVM: VMX: Support hosted VMM coexistence.

On 03/23/2010 06:01 AM, Xu, Dongxiao wrote:
> Avi Kivity wrote:
>> On 03/18/2010 11:49 AM, Xu, Dongxiao wrote:
>>> VMX: Support for coexistence of KVM and other hosted VMMs.
>>>
>>> The following NOTE is taken from the Intel SDM, Volume 3B, Section
>>> 27.3, MANAGING VMCS REGIONS AND POINTERS:
>>>
>>> ----------------------
>>> NOTE
>>> As noted in Section 21.1, the processor may optimize VMX operation
>>> by maintaining the state of an active VMCS (one for which VMPTRLD
>>> has been executed) on the processor. Before relinquishing control to
>>> other system software that may, without informing the VMM, remove
>>> power from the processor (e.g., for transitions to S3 or S4) or leave
>>> VMX operation, a VMM must VMCLEAR all active VMCSs. This ensures
>>> that all VMCS data cached by the processor are flushed to memory
>>> and that no other software can corrupt the current VMM's VMCS data.
>>> It is also recommended that the VMM execute VMXOFF after such
>>> executions of VMCLEAR.
>>> ----------------------
>>>
>>> Currently, VMCLEAR is called at VCPU migration. To support hosted
>>> VMM coexistence, this patch modifies the VMCLEAR/VMPTRLD and
>>> VMXON/VMXOFF usage. VMCLEAR is now called when a VCPU is
>>> scheduled out of a physical CPU, and VMPTRLD when a VCPU is
>>> scheduled in. This approach also eliminates the IPI mechanism that
>>> the original VMCLEAR required. As the SDM suggests, VMXOFF is
>>> called after VMCLEAR, and VMXON before VMPTRLD.
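>>>
>>> In sketch form, the hooks look roughly like this (helper names follow
>>> arch/x86/kvm/vmx.c and are illustrative only, not the exact patch):
>>>
>>>   /* VCPU is scheduled out of a physical CPU. */
>>>   static void vmx_vcpu_put(struct kvm_vcpu *vcpu)
>>>   {
>>>           struct vcpu_vmx *vmx = to_vmx(vcpu);
>>>
>>>           vmcs_clear(vmx->vmcs);    /* VMCLEAR: flush cached VMCS data */
>>>           kvm_cpu_vmxoff();         /* VMXOFF, as the SDM recommends */
>>>   }
>>>
>>>   /* VCPU is scheduled in on a physical CPU. */
>>>   static void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>>>   {
>>>           struct vcpu_vmx *vmx = to_vmx(vcpu);
>>>
>>>           kvm_cpu_vmxon(__pa(per_cpu(vmxarea, cpu))); /* VMXON first */
>>>           vmcs_load(vmx->vmcs);     /* then VMPTRLD */
>>>   }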

>> My worry is that newer processors will cache more and more VMCS
>> contents on-chip, so the VMCLEAR/VMXOFF will cause a greater loss
>> with newer processors.
> Based on our internal testing, we saw less than a 1% performance
> difference even on such processors.

Did you measure workloads that exit to userspace very often?

Also, what about future processors? My understanding is that the manual recommends keeping things cached; the above description is for sleep states.

>>> With this patchset, KVM and VMware Workstation 7 could launch
>>> separate guests, and they work well with each other. Besides, I
>>> measured the performance of this patch; there is no visible
>>> performance loss according to the test results.

>> Is that the only motivation?  It seems like an odd use case.  If there
>> was no performance impact (current or future), I wouldn't mind, but
>> the design of VMPTRLD/VMCLEAR/VMXON/VMXOFF seems to indicate that we
>> want to keep a VMCS loaded as much as possible on the processor.
> I just used KVM and VMware Workstation 7 to test this patchset.

> Through this new usage of VMPTRLD/VMCLEAR/VMXON/VMXOFF,
> we can make hosted VMMs work separately without impacting each
> other.

What I am questioning is whether a significant number of users want to run kvm in parallel with another hypervisor.

--
Do not meddle in the internals of kernels, for they are subtle and quick to panic.

