[PATCH 0/3] KVM: VMX: Support hosted VMM coexistence.

VMX: Support for coexistence of KVM and other hosted VMMs. 

The following NOTE is taken from the Intel SDM Volume 3B, Chapter 27.3,
MANAGING VMCS REGIONS AND POINTERS.

----------------------
NOTE
As noted in Section 21.1, the processor may optimize VMX operation
by maintaining the state of an active VMCS (one for which VMPTRLD
has been executed) on the processor. Before relinquishing control to
other system software that may, without informing the VMM, remove
power from the processor (e.g., for transitions to S3 or S4) or leave
VMX operation, a VMM must VMCLEAR all active VMCSs. This ensures
that all VMCS data cached by the processor are flushed to memory
and that no other software can corrupt the current VMM's VMCS data.
It is also recommended that the VMM execute VMXOFF after such
executions of VMCLEAR.
----------------------

Currently, VMCLEAR is only called on VCPU migration. To support hosted
VMM coexistence, this patchset changes the VMCLEAR/VMPTRLD and
VMXON/VMXOFF usage: VMCLEAR is called when a VCPU is scheduled out
of a physical CPU, and VMPTRLD is called when a VCPU is scheduled
onto a physical CPU. This approach also eliminates the IPI mechanism
that the original VMCLEAR path required. As the SDM suggests, VMXOFF
is called after VMCLEAR, and VMXON is called before VMPTRLD.
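
A minimal sketch of the hook points follows; the helper names
(kvm_cpu_vmxon()/kvm_cpu_vmxoff(), vmcs_load()/vmcs_clear(),
vmxon_region_pa) are illustrative placeholders for the VMXON/VMXOFF
and VMPTRLD/VMCLEAR wrappers, not the actual patch:

  /* Sketch only -- helper names are placeholders, not the actual patch. */

  /* Called when a VCPU is scheduled onto a physical CPU. */
  static void vmx_vcpu_sched_in(struct vcpu_vmx *vmx, int cpu)
  {
  	kvm_cpu_vmxon(per_cpu(vmxon_region_pa, cpu)); /* VMXON before VMPTRLD */
  	vmcs_load(vmx->vmcs);	/* VMPTRLD: make this VCPU's VMCS active */
  }

  /* Called when a VCPU is scheduled out of a physical CPU. */
  static void vmx_vcpu_sched_out(struct vcpu_vmx *vmx)
  {
  	vmcs_clear(vmx->vmcs);	/* VMCLEAR: flush cached VMCS state to memory */
  	kvm_cpu_vmxoff();	/* VMXOFF, as the SDM recommends */
  }

Keeping VMXON before VMPTRLD and VMCLEAR before VMXOFF means the CPU
is only in VMX operation while a KVM VCPU is actually scheduled, which
is what allows another hosted VMM to use VMX in between.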

With this patchset, KVM and VMware Workstation 7 can each launch
their own guests, and they work well alongside each other. I also
measured the performance impact of this patchset; there is no visible
performance loss according to the test results.

The following performance results were obtained from a host with 8 cores.
 
1. vConsolidate benchmarks on KVM
  
Test Round	WebBench	SPECjbb		SysBench	LoadSim	GEOMEAN
1 W/O patch	2,614.72	28,053.09	1,108.41	16.30	1,072.95
  W/ patch	2,691.55	28,145.71	1,128.41	16.47	1,089.28
2 W/O patch	2,642.39	28,104.79	1,096.99	17.79	1,097.19
  W/ patch	2,699.25	28,092.62	1,116.10	15.54	1,070.98
3 W/O patch	2,571.58	28,131.17	1,108.43	16.39	1,070.70
  W/ patch	2,627.89	28,090.19	1,110.94	17.00	1,086.57

Average
W/O patch	2,609.56	28,096.35	1,104.61	16.83	1,080.28
W/ patch	2,672.90	28,109.51	1,118.48	16.34	1,082.28

(GEOMEAN is the geometric mean of the four benchmark scores.)

2. CPU overcommitment tests for KVM

A) Run 8 while(1) loops on the host, each pinned to one of the 8 cores
   (a sketch of such a burner follows this description).
B) Launch 6 guests, each with 8 VCPUs, and pin each VCPU to one core.
C) 5 of the 6 guests each run 8 while(1) loops.
D) The remaining guest runs a kernel build ("make -j9") on a ramdisk.

In this case, the overcommitment ratio for each core is 7:1.
The total VCPU scheduling frequency across all cores is ~15k/sec.
I recorded the kernel build time.
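
For reference, a hypothetical user-space burner for step A; the file
name burn.c and the hard-coded core count are illustrative and not
part of the patchset:

  /* burn.c: spawn one pinned while(1) loop per core (illustrative only). */
  #define _GNU_SOURCE
  #include <sched.h>
  #include <stdio.h>
  #include <sys/wait.h>
  #include <unistd.h>

  int main(void)
  {
  	int ncpus = 8;	/* core count of the test host */
  	int cpu;

  	for (cpu = 0; cpu < ncpus; cpu++) {
  		if (fork() == 0) {
  			cpu_set_t set;

  			CPU_ZERO(&set);
  			CPU_SET(cpu, &set);
  			/* pin this child to a single core */
  			if (sched_setaffinity(0, sizeof(set), &set))
  				perror("sched_setaffinity");
  			for (;;)
  				;	/* while(1) burner */
  		}
  	}
  	for (cpu = 0; cpu < ncpus; cpu++)
  		wait(NULL);	/* children spin forever; kill to stop */
  	return 0;
  }

Build with "gcc -o burn burn.c" and run it on the host before
launching the guests.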
 
When computing the average, the first round is treated as a warm-up
and excluded from the final result.
 
Kernel Build Time (seconds)
Round 		w/o patch 	w/ patch 
1 		541 		501 
2 		488 		490 
3 		488 		492 
4 		492 		493 
5 		489 		491 
6 		494 		487 
7 		497 		494 
8 		492 		492 
9 		493 		496 
10 		492 		495 
11 		490 		496 
12 		489 		494 
13 		489 		490 
14 		490 		491 
15 		494 		497 
16 		495 		496 
17 		496 		496 
18 		493 		492 
19 		493 		500 
20 		490 		499 

Average		491.79		493.74
