Re: cpuinfo, bogomips and duo core

On 12/02/2013 14:05, Reindl Harald wrote:


On 12.02.2013 14:37, Gordan Bobic wrote:
On 12/02/2013 13:24, Reindl Harald wrote:
that just shows that you can disable a lot of services
and overhead in a VM that you would never disable on bare
metal, and that the hypervisor can schedule I/O much more
efficiently than a generic kernel, with less overhead

Utter nonsense. On bare metal, Linux kernel scheduling, even with full awareness of the underlying CPU cores, is
pretty poor. The C2Q is a particularly good example of this: latency and cache misses between the two sets of 2
cores cause a 20%+ drop in throughput compared to pinning heavy processes to a specific core. Systems with
multiple sockets suffer from this issue particularly badly.
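
For illustration, pinning from userspace is a one-liner with sched_setaffinity(). A minimal sketch in C (the choice of core 0 is arbitrary, and pid 0 means the calling process):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;

    CPU_ZERO(&set);     /* start with an empty CPU mask */
    CPU_SET(0, &set);   /* allow only CPU 0 */

    /* pid 0 = the calling process */
    if (sched_setaffinity(0, sizeof(set), &set) == -1) {
        perror("sched_setaffinity");
        return 1;
    }

    printf("pinned to CPU 0\n");
    return 0;
}

The same effect can be had from the shell with taskset -c 0 <command>.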

and that is why you use "elevator=noop" inside a VM and disable as
much scheduling in the guest as possible

elevator= is for disk I/O scheduling; it has nothing whatsoever to do with process scheduling.
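
To make the distinction concrete: the elevator is selected per block device through sysfs, which is all "elevator=noop" affects at boot. A sketch in C, assuming the guest's disk shows up as sda (equivalent to echo noop > /sys/block/sda/queue/scheduler, run as root):

#include <stdio.h>

int main(void)
{
    /* assumption: the guest's virtual disk is sda */
    FILE *f = fopen("/sys/block/sda/queue/scheduler", "w");

    if (!f) {
        perror("fopen");
        return 1;
    }
    fputs("noop", f);   /* ask the block layer to switch elevators */
    fclose(f);
    return 0;
}

Process scheduling is a different subsystem entirely (CFS, sched_setscheduler() and friends) and is untouched by any of this.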

Now consider that you are effectively hiding the physical CPU layout behind a hypervisor that applies its
smoke-and-mirrors and makes it even harder for the guest kernel to do something reasonably sensible - so you get
another 20%+ overhead on top from the extra cache misses and extra context-switching overheads.
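
You can see exactly what layout the guest kernel is being fed by reading the topology files under sysfs; in a VM these report whatever virtual topology the hypervisor fabricates. A sketch (standard mainline sysfs paths, cpu0 picked as an example):

#include <stdio.h>

static void show(const char *path)
{
    char buf[64];
    FILE *f = fopen(path, "r");

    if (!f)
        return;
    if (fgets(buf, sizeof(buf), f))
        printf("%s: %s", path, buf);   /* value includes its own newline */
    fclose(f);
}

int main(void)
{
    show("/sys/devices/system/cpu/cpu0/topology/physical_package_id");
    show("/sys/devices/system/cpu/cpu0/topology/core_id");
    show("/sys/devices/system/cpu/cpu0/topology/thread_siblings_list");
    return 0;
}

If those values do not reflect the physical machine, the guest scheduler's cache-locality decisions are being made against fiction.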

what the hell - the guest does not need to do "something reasonably sensible"
if you are doing things right, because only the hypervisor can schedule hardware
resources, and with enough RAM on the host there is a lot to optimize

No - there is a lot to not get in the way of and make worse.

So much experience, so little understanding...

oh yeah, that is why I have for years disabled most subsystems in the guest kernel
which produce unneeded overhead in a guest and are either not implemented in a
dedicated hypervisor running on bare metal or, if present, are optimized for
virtualization and nothing else, because the VMkernel has only one job instead of
implementing a beast addressing every sort of workload with its overheads

Custom kernels in a production environment? Well done. You also need to read up on what the hypervisor actually does under the hood.

Gordan
--
users mailing list
users@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe or change subscription options:
https://admin.fedoraproject.org/mailman/listinfo/users
Guidelines: http://fedoraproject.org/wiki/Mailing_list_guidelines
Have a question? Ask away: http://ask.fedoraproject.org

