Re: e1000 softirq load balancing

Sure. I tried the mask sets (1, 2, 4, 10) and (1, 4, 10, 20) (and a few others), echoing each hex mask into the appropriate /proc/irq/XXX/smp_affinity file.
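
As a rough sketch of what that amounts to per IRQ (the IRQ numbers below are just placeholders; the real ones come from /proc/interrupts on my box):

/* Rough C equivalent of the "echo <mask> > /proc/irq/N/smp_affinity"
 * writes described above.  IRQ numbers are placeholders; the masks are
 * the first set listed (hex CPU bitmasks: CPUs 0, 1, 2 and 4). */
#include <stdio.h>

int main(void)
{
    const int   irqs[]  = { 16, 17, 18, 19 };       /* placeholder IRQs */
    const char *masks[] = { "1", "2", "4", "10" };  /* CPUs 0, 1, 2, 4  */

    for (int i = 0; i < 4; i++) {
        char path[64];
        FILE *f;

        snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity", irqs[i]);
        f = fopen(path, "w");
        if (!f) {
            perror(path);
            continue;
        }
        fprintf(f, "%s\n", masks[i]);
        fclose(f);
    }
    return 0;
}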

Thanks,
Don

Aaron Porter wrote:
On Tue, Oct 14, 2008 at 12:05 PM, Don Porter <porterde@xxxxxxxxxxxxx> wrote:
What I am observing is that a single ksoftirqd thread is becoming a
bottleneck for the system.
More specifically, one cpu runs ksoftirqd at 100% cpu utilization, while 4
cpus each run their servers at about 25%.  I carefully used
sched_setaffinity() to map server threads to cpus and
/proc/irq/<IRQ>/smp_affinity to map hardware interrupts to cpus such that
there should be exactly 1 cpu per server thread and 1 cpu for servicing
hardware interrupts per device.

Could you provide some more detail about the smp_affinity mask you're
setting? I can push a pretty constant 5.9 Gbps out of a 6 Gbps bond device
(2 x forcedeth, 4 x e1000) using the one-IRQ-per-CPU method.

