Re: Limited IOP/s on Dual Xeon KVM Host


 



On 10.11.2012 17:00, Andrey Korolyov wrote:
If you mean the underlying node, it could be remote memory access, or, if you
are on a last-gen Xeon with dual IO hubs, you could be hitting a
remote IO hub for the network card. I wouldn't think that would cause
such a big hit, but those are things to look into.


I'm on an E5 Xeon. What does "IO hub" mean?

QPI path length - in other terms, NUMA distance (I hope Mark means the
same). Yes, it should be impossible to see such degradation even in the worst
case on a two-socket node. I assume one of two things: either you have pinned
many processes onto the core set which includes the default core for the
network card's IRQ - please check this via /proc/interrupts -
The 10GbE card, with all its queues, is aligned to one CPU (using the Intel IRQ affinity script).
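For reference, checking which CPU services the card's IRQs and pinning one of them can be sketched roughly as follows. The interface name (eth2) and IRQ number (54) are placeholders - take the real values from /proc/interrupts on your host:

```shell
# Sketch: inspect and pin a NIC's IRQ affinity (interface/IRQ numbers are
# placeholders; read the real ones from /proc/interrupts).

# Build a hex smp_affinity mask for a single CPU (CPU 0 = bit 0).
cpu_mask() {
    printf '%x\n' $((1 << $1))
}

# Show the NIC's IRQ lines and their per-CPU interrupt counts:
# grep eth2 /proc/interrupts

# Pin IRQ 54 (placeholder) to CPU 3, i.e. mask 0x8 (requires root):
# echo "$(cpu_mask 3)" > /proc/irq/54/smp_affinity
```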

or you did not really do any pinning and the qemu process is losing ticks by
switching cores - this can be checked, say, with top and a guest CPU
benchmark. For the network card, it is generally recommended to move its IRQ
affinity to the entire NUMA node to which it belongs.
Might be, but I'm seeing that the 10GbE card is also slower than on other systems ;-(
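Pinning the card's IRQs to a whole NUMA node, as suggested above, just means a mask covering all of that node's CPUs. A minimal sketch, assuming node 0 holds CPUs 0-7 (verify with /sys/devices/system/node/node0/cpulist on your box; the IRQ number 54 is again a placeholder):

```shell
# Sketch: build an smp_affinity mask covering a contiguous CPU range,
# e.g. all CPUs of one NUMA node (range is an assumption - check
# /sys/devices/system/node/node0/cpulist for the real layout).
node_mask() {
    # node_mask FIRST_CPU LAST_CPU -> hex mask with those bits set
    local first=$1 last=$2 mask=0 cpu
    for cpu in $(seq "$first" "$last"); do
        mask=$((mask | (1 << cpu)))
    done
    printf '%x\n' "$mask"
}

# Apply to each of the card's IRQs (numbers from /proc/interrupts):
# echo "$(node_mask 0 7)" > /proc/irq/54/smp_affinity
```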

Stefan
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

