Re: [PATCH 0/3] KVM: VMX: Support hosted VMM coexistence.

On 23.03.2010, at 09:58, Avi Kivity wrote:

> On 03/23/2010 10:33 AM, Xu, Dongxiao wrote:
>> 
>>> Did you measure workloads that exit to userspace very often?
>>> 
>>> Also, what about future processors?  My understanding is that the
>>> manual recommends keeping things cached, the above description is for
>>> sleep states.
>>>     
>> I measured the performance using a kernel build in a guest. I launched 6
>> guests; 5 of them and the host are running while(1) loops, and the remaining
>> guest is doing the kernel build. The CPU overcommitment is 7:1, and the vcpu
>> schedule frequency is about 15k/sec. I tested this with the new Intel
>> processors I have on hand, and the performance difference is small.
>>   
> 
> The 15k/sec context switches are distributed among 7 entities, so we have about 2k/sec for the guest you are measuring.  If the cost is 1 microsecond, then the impact would be 0.2% on the kernel build.  But 1 microsecond is way too high for some workloads.
> 
> Can you measure the impact directly?  kvm/user/test/x86/vmexit.c has a test called inl_pmtimer that measures the cost of an exit to userspace.  Please run it with and without the patch.
> 
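
To make that comparison concrete, something along the lines of the sketch below could be run inside the guest. This is not the actual vmexit.c test; the port number and constants are assumptions (the port is a guess at QEMU's usual ACPI PM timer location), and the program needs root inside the guest for iopl().

/* Rough, hypothetical sketch -- not kvm/user/test/x86/vmexit.c itself.
 * Time a port read that must be emulated in userspace, averaged over
 * many iterations.  The port number is an assumption. */
#include <stdio.h>
#include <sys/io.h>
#include <x86intrin.h>

#define PMTIMER_PORT 0xb008   /* assumed ACPI PM timer port on a QEMU guest */
#define ITERATIONS   100000

int main(void)
{
        if (iopl(3)) {                  /* allow port I/O from user space */
                perror("iopl");
                return 1;
        }

        unsigned long long start = __rdtsc();
        for (int i = 0; i < ITERATIONS; i++)
                inl(PMTIMER_PORT);      /* each read exits to the userspace VMM */
        unsigned long long end = __rdtsc();

        printf("avg cycles per exit: %llu\n", (end - start) / ITERATIONS);
        return 0;
}

Running it with and without the patch (and, ideally, with the coexistence behaviour switched on and off) would show the per-exit cost directly.
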
> btw, what about VPID?  That's a global resource.  How do you ensure no VPID conflicts?
> 
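
For background on why VPIDs are a concern: within a single VMM they are normally handed out from one host-wide bitmap, so collisions are avoided by construction, but a second, independent VMM has no view of that bitmap. The following is only an illustrative sketch of such an allocator (the names are invented), not KVM's actual code:

#include <stdio.h>
#include <pthread.h>

#define VMX_NR_VPIDS 65536

static unsigned char vpid_bitmap[VMX_NR_VPIDS / 8];   /* one bit per VPID */
static pthread_mutex_t vpid_lock = PTHREAD_MUTEX_INITIALIZER;

static int allocate_vpid(void)
{
        int vpid = 0;   /* 0 means "no VPID" -- it is reserved for the host */

        pthread_mutex_lock(&vpid_lock);
        for (int i = 1; i < VMX_NR_VPIDS; i++) {
                if (!(vpid_bitmap[i / 8] & (1u << (i % 8)))) {
                        vpid_bitmap[i / 8] |= 1u << (i % 8);
                        vpid = i;
                        break;
                }
        }
        pthread_mutex_unlock(&vpid_lock);
        return vpid;
}

int main(void)
{
        /* Two guests in the same VMM get distinct tags. */
        printf("guest A vpid=%d, guest B vpid=%d\n", allocate_vpid(), allocate_vpid());
        return 0;
}

A second VMM with its own private copy of such a bitmap could hand out the same numbers again, which is exactly the conflict being asked about.
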
>>>>> Is that the only motivation?  It seems like an odd use-case.  If
>>>>> there was no performance impact (current or future), I wouldn't
>>>>> mind, but the design of VMPTRLD/VMCLEAR/VMXON/VMXOFF seems to
>>>>> indicate that we want to keep a VMCS loaded as much as possible on
>>>>> the processor.
>>>>> 
>>>>>         
>>>> I tested this patchset with just KVM and VMware Workstation 7.
>>>> 
>>>> Through this new usage of VMPTRLD/VMCLEAR/VMXON/VMXOFF,
>>>> we can make hosted VMMs work independently without impacting
>>>> each other.
>>>> 
>>>>       
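
If I follow the description, the mechanism boils down to something like the sketch below: take over the CPU's VMX state when a KVM vcpu is scheduled in, and give it all back when the vcpu is scheduled out, so another hosted VMM can do its own VMXON in between. This is a concept sketch only, not the actual patch; the structure and helper names are invented for illustration.

#include <linux/types.h>

/* Concept sketch only, NOT the actual patch.  Struct and helpers invented. */
struct vcpu_hw_state {
        u64 vmxon_region_pa;    /* physical address of the VMXON region */
        u64 vmcs_pa;            /* physical address of this vcpu's VMCS */
};

static void hw_state_load(struct vcpu_hw_state *s)      /* vcpu scheduled in */
{
        /* CR4.VMXE must already be set before VMXON (omitted here). */
        asm volatile("vmxon %0" : : "m"(s->vmxon_region_pa));
        asm volatile("vmptrld %0" : : "m"(s->vmcs_pa));  /* make the VMCS current */
}

static void hw_state_put(struct vcpu_hw_state *s)        /* vcpu scheduled out */
{
        asm volatile("vmclear %0" : : "m"(s->vmcs_pa));  /* write the VMCS back to memory */
        asm volatile("vmxoff");                          /* leave VMX operation */
        /* CR4.VMXE can now be cleared; the CPU is free for another VMM. */
}

The cost being debated is precisely this extra VMXON/VMXOFF plus VMCLEAR/VMPTRLD on every vcpu switch, compared with keeping the VMCS resident on the CPU.
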
>>> What I am questioning is whether a significant number of users want to
>>> run kvm in parallel with another hypervisor.
>>>     
>> At least this approach gives users the option to run VMMs in parallel without
>> significant performance loss. Consider this scenario: a server already has
>> VMware software deployed, but some new customers want to use KVM;
>> this patch would let them meet that requirement.
>>   
> 
> For server workloads, VMware users will run ESX, on which you can't run KVM.  If someone wants to evaluate KVM or VMware on a workstation, they can shut down the other product.  I simply don't see a scenario where running both concurrently would be worth even a small performance loss.

I can certainly see value for some people. I just don't think we should burden every user with the performance penalty. Hence my suggestion to default this behavior to off.

1% might not sound like a lot, but people have worked pretty hard optimizing for less :-).
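
Making it opt-in could be as simple as a module parameter on kvm-intel that defaults to off. The sketch below is hypothetical (the parameter name is invented) and only shows the shape of it:

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/moduleparam.h>

/* Hypothetical sketch; the parameter name is invented for illustration. */
static bool vmm_coexistence;    /* defaults to false: keep today's fast path */
module_param(vmm_coexistence, bool, 0444);
MODULE_PARM_DESC(vmm_coexistence,
        "Release VMX hardware state on every vcpu switch so other hosted VMMs can run");

static int __init coexist_demo_init(void)
{
        pr_info("vmm_coexistence=%d\n", vmm_coexistence);
        return 0;
}

static void __exit coexist_demo_exit(void)
{
}

module_init(coexist_demo_init);
module_exit(coexist_demo_exit);
MODULE_LICENSE("GPL");

That way only users who explicitly ask for coexistence pay the per-switch cost, and everyone else keeps the current behavior.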


Alex
