Re: KVM timekeeping and TSC virtualization

On 08/23/10 23:47, Zachary Amsden wrote:
> I've heard the rumor that TSC is orders of magnitude faster under VMware
> than under KVM from three people now, and I thought you were part of
> that camp.
> 
> Needless to say, they are either laughably incorrect, or possess some
> great secret knowledge of how to make things under virtualization go
> faster than bare metal.
> 
> I also have a magical talking unicorn, which, btw, is invisible. 
> Extraordinary claims require extraordinary proof (the proof of my
> unicorn is too complex to fit in the margin of this e-mail, however, I
> assure you he is real).

I have put in a lot of time over the past three years to understand how
the 'magic' of virtualization works; please don't lump me into a camp
until I raise my hand as being part of one.


>> My point is that kvmclock is Red Hat's answer for the future -- RHEL6,
>> RHEL5.Y (whenever it proves reliable). What about the present?  What
>> about products based on other distributions newer than RHEL5 but
>> pre-kvmclock?
>>    
> 
> It should be obvious from this patchset... PIT or TSC.
> 
> KVM did not have an in-kernel PIT implementation circa 2008, so this
> data is quite old.  It's much faster now and will continue to get faster
> as exit cost goes down and the emulation gets further optimized.

KVM had an in-kernel PIT by early 2008 (kernel git entry):

commit 7837699fa6d7adf81f26ab73a5f6897ea1ab9d6a
Author: Sheng Yang <sheng.yang@xxxxxxxxx>
Date:   Mon Jan 28 05:10:22 2008 +0800

    KVM: In kernel PIT model


> 
> Plus, now we have an error-free TSC.
> 
>> There are a lot of moving windows of what to use as a clock source, not
>> just per major number (RHEL4, RHEL5) but minor number (e.g., TSC
>> stability on RHEL4 -- e.g.,
>> https://bugzilla.redhat.com/show_bug.cgi?id=491154) and further
>> maintenance releases (kvmclock requiring RHEL5.5+). That is not very
>> friendly to a product making a transition to virtualization - and with
>> the same code base running bare metal or in a VM.
>>    
> 
> If you have old software running on broken hardware you do not get
> hardware performance and error-free time virtualization.  With any
> vendor.  Period.

Sucks to be old *and* broken. But old software with fancy new wheels,
er, hardware -- like commodity x86 servers running Nehalem-based
processors -- is a different story.

> 
> With this patchset, KVM now has a much stronger guarantee: If you have
> old guest software running on broken hardware, using SMP virtual
> machines, you do not get hardware performance and error-free time
> virtualization.    However, if you have new guest software, non-broken
> hardware, or can simply run UP guests instead of SMP, you can have
> hardware performance, and it is now error free.  Alternatively, you can
> sacrifice some accuracy and have hardware performance, even for SMP
> guests, if you can tolerate some minor cross-CPU TSC variation.  No
> other vendor I know of can make that guarantee.
> 
> Zach

If the processor has a stable TSC, why trap it? I realize you are trying
to cover a gamut of hardware and guests, so maybe a nerd knob is needed
to disable the trapping.
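
For reference, the hardware side of that knob on Intel VMX is the
"RDTSC exiting" bit (bit 12) of the primary processor-based
VM-execution controls; what the silicon permits is advertised in the
IA32_VMX_PROCBASED_CTLS capability MSR (0x482). A rough host-side peek
at that MSR, only a sketch and assuming root plus the msr driver
exposing /dev/cpu/0/msr, could look like:

/* vmx_rdtsc_exit.c: check whether VMX hardware allows running a guest
 * with "RDTSC exiting" clear (i.e. rdtsc not trapped).
 * Sketch only: assumes an Intel CPU with VMX, root privileges, and the
 * msr driver providing /dev/cpu/0/msr. */
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>

#define IA32_VMX_PROCBASED_CTLS 0x482
#define RDTSC_EXITING_BIT       12

int main(void)
{
    uint64_t val;
    int fd = open("/dev/cpu/0/msr", O_RDONLY);

    if (fd < 0) {
        perror("open /dev/cpu/0/msr (is the msr module loaded?)");
        return 1;
    }
    if (pread(fd, &val, sizeof(val), IA32_VMX_PROCBASED_CTLS)
            != (ssize_t)sizeof(val)) {
        perror("pread IA32_VMX_PROCBASED_CTLS");
        close(fd);
        return 1;
    }
    close(fd);

    /* Bits set in the low dword are controls that must be 1; bits set
     * in the high dword are controls that may be 1.  If bit 12 is not
     * forced on, the hypervisor may run guests without trapping rdtsc. */
    uint32_t must_be_one = (uint32_t)val;
    uint32_t may_be_one  = (uint32_t)(val >> 32);

    printf("RDTSC exiting: %s be enabled, %s be left off\n",
           (may_be_one  & (1u << RDTSC_EXITING_BIT)) ? "can"    : "cannot",
           (must_be_one & (1u << RDTSC_EXITING_BIT)) ? "cannot" : "can");
    return 0;
}

Whether the silicon allows it is of course a separate question from
whether KVM exposes a switch for it; the above only shows the former.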

David

