On 12/02/2013 14:05, Reindl Harald wrote:
> On 12.02.2013 14:37, Gordan Bobic wrote:
>> On 12/02/2013 13:24, Reindl Harald wrote:
>>> that just tells you that you can disable a lot of services
>>> and overhead in a VM that you would never disable on bare metal,
>>> and that the hypervisor can schedule I/O much more
>>> efficiently than a generic kernel, with less overhead
>> Utter nonsense. Even on bare metal, with full awareness of the underlying
>> CPU cores, the Linux kernel's process scheduling is pretty poor. The C2Q
>> (Core 2 Quad) is a particularly good example of this: latency and cache
>> misses between its two pairs of cores cause a 20%+ drop in throughput
>> compared to pinning heavy processes to a specific core. Systems with
>> multiple sockets also suffer from this issue particularly badly.
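For what it's worth, the kind of pinning described above can be done from userspace, e.g. with taskset(1) or, from Python, os.sched_setaffinity - a minimal, Linux-only sketch (core 0 is an arbitrary example):

```python
import os

# Pin the calling process (pid 0 = self) to CPU core 0, so the
# scheduler can no longer migrate it across cores/sockets.
# Linux-only: os.sched_setaffinity is not available on all platforms.
os.sched_setaffinity(0, {0})

# Read the affinity mask back to confirm the pinning took effect.
print(sorted(os.sched_getaffinity(0)))  # -> [0]
```

The same effect at process launch: `taskset -c 0 <command>`.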
> and that is why you use "elevator=noop" inside a VM and disable
> as much scheduling in the guest as possible
elevator= controls disk I/O scheduling; it has nothing whatsoever to do
with process scheduling.
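For reference, the runtime equivalent of the "elevator=noop" boot parameter is the per-device sysfs knob (sda is an example device name; the sysfs paths are standard Linux):

```shell
# List the schedulers the kernel offers; the bracketed one is active:
cat /sys/block/sda/queue/scheduler
# e.g.: [noop] deadline cfq

# Switch one device to noop at runtime (as root):
echo noop > /sys/block/sda/queue/scheduler

# Or set it globally at boot via the kernel command line:
#   elevator=noop
```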
>> Now consider that you are effectively hiding the physical CPU layout behind
>> a hypervisor that applies its smoke and mirrors and makes it even harder for
>> the guest kernel to do something reasonably sensible - so you get another
>> 20%+ overhead on top, from the extra cache misses and extra
>> context-switching overhead.
> what the hell - the guest does not need to do "something reasonably sensible"
> if you are doing things right, because only the hypervisor can schedule
> hardware resources, and with enough RAM on the host there is a lot to optimize
No - there is a lot for the guest to stay out of the way of, rather than
make worse.
So much experience, so little understanding...
> oh yeah, that is why i have been disabling most subsystems in the guest
> kernel for years - ones which produce unneeded overhead in a guest and are
> either not implemented in a dedicated hypervisor running on bare metal or,
> if present, are optimized for virtualization and nothing else, because the
> VMkernel has only one job, instead of implementing a beast that addresses
> every sort of workload with overheads
Custom kernels in a production environment? Well done. You also need to
read up on what the hypervisor actually does under the hood.
Gordan
--
users mailing list
users@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe or change subscription options:
https://admin.fedoraproject.org/mailman/listinfo/users
Guidelines: http://fedoraproject.org/wiki/Mailing_list_guidelines
Have a question? Ask away: http://ask.fedoraproject.org