On Tue, Apr 15, 2008 at 09:06:44PM +0300, Anton Titov wrote:
> I use Linux for serving a huge amount of static web content on a few
> servers. When network traffic goes above 2Gbit/sec, ksoftirqd/5 (not
> every time 5, but every time just one) starts using exactly 100% CPU
> time, and packet loss starts preventing traffic from going up. When
> the network traffic is lower than 1.9Gbit, the ksoftirqds use 0% CPU
> according to top.
>
> The uplink is 6 gigabit Intel cards bonded together using the 802.3ad
> algorithm with xmit_hash_policy set to layer3+4. On the other side is
> a Cisco 2960 switch. The machine has two quad-core Intel Xeons
> @2.33GHz.
>
> Here is a screen snapshot of the "top" command. The described
> behavior has nothing to do with the 13% io-wait; it happens even at
> 0% io-wait.
> http://www.titov.net/misc/top-snap.png
>
> Kernel configuration:
> http://www.titov.net/misc/config.gz
>
> /proc/interrupts, lspci, dmesg (nothing interesting there), ifconfig,
> uname -a:
> http://www.titov.net/misc/misc.txt.gz
>
> Is it a Linux bug or some hardware limitation?

You are possibly missing some parameters when loading your e1000
drivers. e1000 NICs support interrupt rate limiting, which proves very
efficient in cases such as yours. I usually limit them to about 5k
ints/s. Run "modinfo e1000" to get the parameter name; I don't have it
quite right in mind.

Also, I've CCed linux-net.

Regards,
Willy
--
To unsubscribe from this list: send the line "unsubscribe linux-net" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
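[Editor's note] The e1000 parameter Willy alludes to is InterruptThrottleRate, the driver's interrupt rate limiter (documented in the kernel's e1000 driver notes). A sketch of how one might apply his ~5k ints/s suggestion, assuming six e1000 ports as in Anton's bond — the file path and per-port values are illustrative, not from the thread:

```shell
# List the e1000 module parameters and confirm the limiter's name,
# as Willy suggests:
modinfo -p e1000

# /etc/modprobe.d/e1000.conf (illustrative path) -- cap each port at
# roughly 5000 interrupts/sec; InterruptThrottleRate takes one
# comma-separated value per NIC, six bonded ports assumed here:
options e1000 InterruptThrottleRate=5000,5000,5000,5000,5000,5000
```

The module must be reloaded (or the machine rebooted) for the new options line to take effect.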