Auke Kok wrote:
Juan Pablo Abuyeres wrote:
On Tue, 2006-05-02 at 08:43 -0700, Auke Kok wrote:
handled by whatever cpu is available, and the same goes for routing.
Make sure you run an irqbalance daemon to spread the RX interrupt
load across the CPUs, if applicable.
Thank you.
Should IRQs for a given network interface be distributed across all CPUs
or only be handled by one CPU? I've read a lot of stuff about in-kernel
IRQ balancing (which I just found out is now obsolete), smp_affinity,
and the irqbalance userland daemon, but I'm still confused about what I
should see in /proc/interrupts as optimal behavior. So far, I've only
been able to get interrupts for a given NIC handled by a single CPU.
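For reference, each row of /proc/interrupts lists the IRQ number, one
count column per CPU, and the device name; a NIC whose interrupts land
on one CPU shows its count growing in a single column. A minimal sketch
of reading that layout, using a made-up snippet (the IRQ numbers,
counts, and interface names here are invented for illustration; on a
real box just cat /proc/interrupts):

```shell
# Invented /proc/interrupts excerpt: header row of CPU labels, then
# one row per IRQ with per-CPU counts and the device name last.
sample='           CPU0       CPU1
 18:    1234567          0   IO-APIC-level  eth0
 19:          0    7654321   IO-APIC-level  eth1'

# Column 2 is the CPU0 count, column 3 the CPU1 count, and the last
# field is the device; report which CPU services each NIC.
printf '%s\n' "$sample" | awk 'NR > 1 {
    cpu = ($2 + 0 >= $3 + 0) ? "CPU0" : "CPU1"
    print $NF, "is serviced mostly by", cpu
}'
```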
Optimally, the IRQ of an interface should be tied to a single CPU, and
the interrupts of other NICs should each go to another, so that every
CPU is consistently bound to one NIC. The in-kernel balancer is
obsolete and performs poorly because it keeps moving IRQs around
without a real reason. smp_affinity is nice but a one-shot tool: it
may achieve the theoretical maximum performance, but you lose the
flexibility of irqbalance (the userspace daemon), which can spread all
interrupts over the CPUs and thus can handle sudden bursts of hdd
usage or other I/O interrupt floods.
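For the record, smp_affinity is a hex bitmask of CPUs written to
/proc/irq/<N>/smp_affinity. A minimal sketch of building such a mask
(the IRQ number 19 below is hypothetical; look up the real one in
/proc/interrupts, and the actual write requires root):

```shell
# The affinity mask is a hex bitmask: bit N set means CPU N may
# service the interrupt (1 = CPU0, 2 = CPU1, 4 = CPU2, ...).
cpu=1
mask=$(printf '%x' $((1 << cpu)))
echo "mask pinning to CPU$cpu is $mask"

# Applying it needs root, and IRQ 19 is just an example number:
#   echo "$mask" > /proc/irq/19/smp_affinity
```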
Agreed; but binding NICs to CPUs seems to hurt response times and
increase context switching. You don't want a user process on CPU0 to be
migrated to CPU1 because CPU0 is servicing the interrupt, and you don't
want the process to wait for CPU0 to come back with its cache changes
either.
I have found that hyperthreading helps for routing; not as much as
multiple CPUs do, but it is still useful.
--
bill davidsen <davidsen@xxxxxxx>
CTO TMR Associates, Inc
Doing interesting things with small computers since 1979
-
To unsubscribe from this list: send the line "unsubscribe linux-net" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html