Re: ESXi, KVM or Xen?

Emmanuel Noobadmin wrote:
>> if by 'put storage on the network' you mean using a block-level
>> protocol (iSCSI, FCoE, AoE, NBD, DRBD...), then you should by all
>> means initiate on the host OS (Dom0 in Xen) and present to the VM as
>> if it were local storage.  it's far faster and more stable that way.
>> in that case, storage wouldn't add to the VM's network load, which
>> might or might not make those (old) scenarios irrelevant
> 
> Thanks for that tip :)
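(As an illustration of the host-initiated approach described above, here is a hypothetical libvirt disk fragment; the target address, IQN and by-path name are invented examples, not from the thread:)

```xml
<!-- Hypothetical sketch: the host (Dom0) has already logged into the
     iSCSI target, e.g. via open-iscsi, so a local block device exists.
     That device is handed to the guest as a plain virtio disk, meaning
     the guest's own network carries no storage traffic.
     The by-path name below is an invented example. -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source dev='/dev/disk/by-path/ip-192.0.2.10:3260-iscsi-iqn.2010-01.com.example:store0-lun-0'/>
  <target dev='vda' bus='virtio'/>
</disk>
```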
> 
>> in any case, yes; Xen does have more maturity on big hosting
>> deployments.  but most third parties are betting on KVM for the
>> future.  the biggest examples are Red Hat, Canonical, libvirt (which is
>> sponsored by Red Hat), and Eucalyptus (which reimplements amazon's EC2
>> with either Xen or KVM, focusing on the latter) so the gap is closing.
> 
> This is what I figured too, hence not a straightforward choice. I
> don't need top notch performance for most of the servers targeted for
> virtualization. Loads are usually low except on the mail servers, and
> often only when there's a mail loop problem. So if the performance hit
> under worst-case conditions is only 10~20%, it's something I can live
> with. Especially since the intended VM servers (i5/i7) will be
> significantly faster than the current ones (P4/C2D) I'm basing my
> estimates on.
> 
> But I need to do my due diligence and have justification ready to
> show that current performance/reliability/security is at least "good
> enough" instead of "I like where KVM is going and think it'll be the
> platform of choice in the years to come". Bosses and clients tend to
> frown on that kind of thing :D

How much customization will you apply to your virtualization
infrastructure? If you can manage to do the majority via a proper
hypervisor abstraction, specifically libvirt, you will actually have
considerable freedom in choosing the platform. If not, I would look
very carefully at the management interfaces of all those hypervisors:
how much they conform to standard administration procedures, and what
special handling they require, both on the host and the guest side.
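(A minimal sketch of what that abstraction buys you; the domain name, image path and sizes below are invented. With libvirt, largely the same domain description drives either hypervisor, with mainly the type attribute changing:)

```xml
<!-- Hypothetical minimal domain definition. With libvirt the same
     description largely carries over between hypervisors; mainly the
     type attribute (and emulator specifics) change. -->
<domain type='kvm'>  <!-- or type='xen' -->
  <name>mail01</name>
  <memory>1048576</memory>  <!-- KiB -->
  <vcpu>2</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/mail01.img'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='bridge'>
      <source bridge='br0'/>
    </interface>
  </devices>
</domain>
```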

> 
>> and finally, even if right now the 'best' deployment on Xen definitely
>> outperforms KVM by a measurable margin; when things are not optimal
>> Xen degrades a lot quicker than KVM.  in part because the Xen core
>> scheduler is far from the maturity of Linux kernel's scheduler.
> 
> The problem is finding stats to back that up if my clients/boss ask
> about it. So far most of the available comparisons/data seem rather
> dated, mostly 2007 and 2008. The most "professional" looking one, in
> that PDF I linked to, seems to indicate the opposite, i.e. KVM
> degrades faster when things go south. That graph with the Apache
> problem is especially damning because our primary products/services
> are web-based applications, with infrastructure as a supplementary
> service/product.
> 
> In addition, I remember reading a thread on this list where an Intel
> developer pointed out that the Linux scheduler causes a performance
> hit, about 8x~10x slower, when the physical processors are heavily
> loaded and there are more vCPUs than pCPUs, because it puts the same
> VM's vCPUs onto the same physical core.

That's only relevant if you run SMP guests on over-committed hosts. What
will your guests look like?
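(One hedge against the vCPU-stacking scenario quoted above is explicit pinning in the libvirt domain XML; a sketch, with illustrative CPU numbers, and only practical when the host is not over-committed:)

```xml
<!-- Sketch: pin an SMP guest's two vCPUs to different physical CPUs
     so the host scheduler cannot stack them on one core. CPU numbers
     are illustrative; cputune/vcpupin needs a reasonably recent libvirt. -->
<vcpu>2</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='0'/>
  <vcpupin vcpu='1' cpuset='2'/>
</cputune>
```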

> 
> So I am a little worried, since 8~10x is a massive difference,
> especially if some process goes awry, starts chewing up processor
> cycles and the VM starts to lag because of it. A vicious cycle that
> makes it even harder to fix things without killing the VM.
> 
> Of course if I could honestly tell my clients/boss "This, this and
> this are rare situations we will almost never encounter...", then it's
> a different thing. Hence asking about this here :)

All solutions have weak points. The point is indeed to estimate whether
your use cases will trigger them. Even then, the question remains
whether a given weakness is inherent to the solution's design or likely
to be fixed before you actually hit it. And weaknesses are not limited
to performance aspects.

Jan


