Re: [PATCH v1 0/5] KVM in-guest performance monitoring

On Thu, May 12, 2011 at 04:31:38PM +0300, Avi Kivity wrote:
> - when the cpu gains support for virtualizing the architectural feature,  
> we transparently speed the guest up, including support for live  
> migrating from a deployment that emulates the feature to a deployment  
> that properly virtualizes the feature, and back.  Usually the  
> virtualized support will beat the pants off any paravirtualization we can 
> do
> - following an existing spec is a lot easier to get right than doing  
> something from scratch
> - no need to meticulously document the feature

That needs to be done, but I don't think it's problematic.

> - easier testing

I don't think testing would differ between the two variants.

> - existing guest support - only need to write the host side (sometimes  
> the only one available to us)

Otherwise I agree.

> Paravirtualizing does have its advantages.  For the PMU, for example, we  
> can have a single hypercall read and reprogram all counters, saving  
> *many* exits.  But I think we need to start from the architectural PMU  
> and see exactly what the problems are, before we optimize it to death.

The problem certainly is that with the arch PMU we add a lot of MSR exits
to the guest's context-switch path if it uses per-task profiling.
Depending on the workload, this can significantly distort the results.

	Joerg

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html