RE: e1000 softirq load balancing

It differs for each driver. Generally, for Intel drivers, the driver version string says whether it is a NAPI build or not.
You can check this with the command "modinfo e1000 | grep NAPI".
Alternatively, run 'ethtool -i <interface-name>' (where <interface-name> is the Intel device's interface name, e.g. eth0 or eth1) and look at the driver version it reports.
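
For example (the version strings below are only illustrative; the exact value depends on how your driver was built):

  # modinfo e1000 | grep NAPI
  version:        7.3.20-k2-NAPI
  # ethtool -i eth0
  driver: e1000
  version: 7.3.20-k2-NAPI

If the reported version carries a "-NAPI" suffix, the driver was built in NAPI mode; if not, it is the plain interrupt-driven build.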

Whereas for Broadcom, the driver version does not say whether it is NAPI mode or not, and I don't know how to identify this for Broadcom drivers.

Regards,
Madhukar.

-----Original Message-----
From: yong xue [mailto:ultraice@xxxxxxxxx] 
Sent: Wednesday, October 15, 2008 9:09 PM
To: Mythri, Madhukar [NETPWR/EMBED/HYDE]
Cc: porterde@xxxxxxxxxxxxx; linux-net@xxxxxxxxxxxxxxx
Subject: Re: e1000 softirq load balancing

On a running system, can you please tell me how to check which mode the driver is in, NAPI or plain interrupt?

2008/10/15, Madhukar.Mythri@xxxxxxxxxxx <Madhukar.Mythri@xxxxxxxxxxx>:
>
> First, make sure you have one dedicated interrupt line for each
> network card.
> Did you get 100% CPU utilization with NAPI disabled or with it enabled?
>
> With NAPI enabled, assign each interrupt line to a separate CPU. Then
> pump traffic and check whether the interrupts are distributed as you
> assigned them by looking at "cat /proc/interrupts".
> Also check how the interrupts are balanced between the CPUs.
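>
> For example (the IRQ numbers 48-51, interface names and CPU masks below
> are only hypothetical; take the real IRQ numbers from /proc/interrupts):
>
>   # grep eth /proc/interrupts             # find the IRQ line of each NIC
>   # echo 1 > /proc/irq/48/smp_affinity    # eth0 -> CPU0 (mask 0x1)
>   # echo 2 > /proc/irq/49/smp_affinity    # eth1 -> CPU1 (mask 0x2)
>   # echo 4 > /proc/irq/50/smp_affinity    # eth2 -> CPU2 (mask 0x4)
>   # echo 8 > /proc/irq/51/smp_affinity    # eth3 -> CPU3 (mask 0x8)
>   # watch -n1 cat /proc/interrupts        # pump traffic and watch the per-CPU counts grow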
>
> As per my observation, when NAPI is enabled, the interrupt lines cannot
> effectively be pinned one per CPU. In NAPI mode, whichever CPU gets the
> first interrupt is the one that keeps running in poll mode for the rest
> of the queued packets (H/W buffers).
>
> So, with NAPI enabled, if the interrupts are not distributed across the
> assigned CPUs and one CPU gets loaded more heavily, then try disabling
> NAPI and use the 'irqbalance' utility to load-balance the interrupts
> across all CPUs (http://www.irqbalance.org/).
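>
> A minimal sketch of that last step (the package and init-script names
> are assumptions and vary by distribution):
>
>   # apt-get install irqbalance       # or your distribution's equivalent
>   # /etc/init.d/irqbalance start     # start the balancing daemon
>   # cat /proc/interrupts             # re-check the per-CPU distribution after some traffic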
>
> Regards,
> Madhukar.
>
> -----Original Message-----
> From: Don Porter [mailto:porterde@xxxxxxxxxxxxx]
> Sent: Wednesday, October 15, 2008 6:23 PM
> To: Mythri, Madhukar [NETPWR/EMBED/HYDE]
> Cc: linux-net@xxxxxxxxxxxxxxx
> Subject: Re: e1000 softirq load balancing
>
> I believe I have 4 interrupt lines, but I will have to double-check.
>
> I have successfully used /proc/irq/####/smp_affinity to assign the 
> interrupts to 4 different CPUs.  The problem is in the softirq portion 
> of the interrupt handling, where CPU usage indicates that they are all 
> being funneled back to a single CPU.
>
> Does that make sense?  I feel like one ought to be able to have 4 
> softirq daemons servicing incoming packets, not just one.
>
> Thanks,
> Don
>
> Madhukar.Mythri@xxxxxxxxxxx wrote:
> > You are saying that you have 4 Intel 82571EB Gb NICs (2 PCI cards x 2
> > NICs/chip) using the e1000 driver.
> > So, do you have 4 interrupt lines or only 2?
> >
> > Based on this, if you have 4 interrupt lines, then you can assign each
> > interrupt line to its own CPU core, with NAPI enabled (for good
> > performance).
> >
> > Regards,
> > Madhukar.
> > -----Original Message-----
> > From: linux-net-owner@xxxxxxxxxxxxxxx 
> > [mailto:linux-net-owner@xxxxxxxxxxxxxxx] On Behalf Of Don Porter
> > Sent: Wednesday, October 15, 2008 12:36 AM
> > To: linux-net@xxxxxxxxxxxxxxx
> > Subject: e1000 softirq load balancing
> >
> > Hi,
> >
> > Background:
> >
> > I have a 16 core x86_64 machine (4 chips x 4 cores/chip) that has 4 
> > Intel 82571EB Gb NICs (2 pci cards x 2 NICs/chip) using the e1000 
> > driver.
> >
> > I have a simple client/server micro-benchmark that pounds a server 
> > on each NIC with requests to measure peak throughput.  I am running 
> > Ubuntu 8.04.1, kernel version 2.6.24.
> >
> > Problem:
> >
> > What I am observing is that a single ksoftirqd thread is becoming a 
> > bottleneck for the system.
> > More specifically, one cpu runs ksoftirqd at 100% cpu utilization, 
> > while
> > 4 cpus each run their servers at about 25%.  I carefully used
> > sched_setaffinity() to map server threads to cpus and 
> > /proc/irq/<device>/smp_affinity to map hardware interrupts to cpus 
> > such that there should be exactly 1 cpu per server thread and 1 cpu 
> > for servicing hardware interrupts per device.
> >
> > I can observe (via /proc/interrupts) that the interrupts are being 
> > distributed properly, but despite this I only see 1 or 2 ksoftirqd 
> > running, and the server daemons bottlenecked behind them.  (This is 
> > with NAPI disabled.  With NAPI enabled, I can't get even 2 ksoftirqd 
> > threads to run).  I have tried various permutations such as assigning 
> > each hardware interrupt to a different physical chip.
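> >
> > (For reference, a rough way to see where the softirq work lands --
> > this assumes the usual "ksoftirqd/<cpu>" thread naming and procps:
> >
> >   $ ps -eo pid,psr,pcpu,comm | grep ksoftirqd   # psr = CPU each thread sits on, pcpu = its load
> >   $ top    # press '1' and watch the per-CPU "si" column
> >
> > which is how the load shows up concentrated on one or two CPUs.)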
> >
> > Desired Result:
> >
> > It seems to me that with 4 independent NICs and plenty of CPUs to 
> > spare, I ought to be able to assign one softirq daemon to each NIC 
> > rather than funnelling all of the traffic through 1 or 2.
> >
> > Any advice on this issue is greatly appreciated.
> >
> > Best regards,
> > Don Porter
>
>


--
Best Regards,

薛 勇

QQ:312200

e-mail:ultraice@xxxxxxxxx
MSN:it@xxxxxxxxxxxxxxxxx