We have IBM POWER6 CPUs at our benchmark center; the boxes are 20-core machines. The network cards use the Intel e1000 driver. We have tried the following two driver parameters:

RxAbsIntDelay=0
InterruptThrottleRate=0

I will try to find out more about multiple queues ... but I know little about the qdisc.

__
tharindu

On Wed, Jun 17, 2009 at 10:53 PM, Michael Blizek <michi1@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx> wrote:
> Hi!
>
> On 14:10 Wed 17 Jun, Tharindu Rukshan Bamunuarachchi wrote:
>> With Oprofile we did not see the issue, because of the performance hit; at
>> least we could not pump the desired transaction rate.
>
> OK. It would still be interesting to know which CPUs you have. Maybe somebody
> might be able to reproduce this on a system with more CPUs...
>
>> However, I did find a workaround for the 100% issue.
>>
>> I have installed two 1G network cards and used bonding with
>> balance-rr (round robin).
>>
>> Now I do not see 100% utilization of the ksoftirqd thread.
>>
>> BTW, I heard the -rt patch divides ksoftirqd into two pieces, one for rx
>> and one for tx. Have you guys tried the -rt patch?
>
> I do not use the -rt patch. But the problem going away with two network cards
> seems interesting. The network folks have found and resolved a similar
> problem. The cause was lock contention in the qdisc, and they have
> implemented support for "multiple hardware queues". I think your network
> card supports this feature, but you might have to enable it in the kernel
> config or somewhere.
>
> -Michi
> --
> programming a layer 3+4 network protocol for mesh networks
> see http://michaelblizek.twilightparadox.com
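
For reference, the two e1000 parameters above are module options, so a minimal way to apply them (assuming e1000 is built as a module rather than into the kernel) looks like this:

    # reload e1000 with interrupt throttling and the absolute rx delay disabled
    modprobe -r e1000
    modprobe e1000 InterruptThrottleRate=0 RxAbsIntDelay=0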
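
The balance-rr bonding workaround from the thread can be sketched roughly as follows; the interface names eth0/eth1 and the address are placeholders, and the config file location varies by distribution:

    # /etc/modprobe.d/bonding.conf (assumed location; miimon=100 is a common link-monitor setting)
    alias bond0 bonding
    options bonding mode=balance-rr miimon=100

    # bring the bond up and enslave the two 1G cards
    ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
    ifenslave bond0 eth0 eth1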
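
To check whether the card actually exposes multiple hardware queues, and which qdisc is attached, something like the following should work (eth0 is a placeholder for the real interface name):

    # a multiqueue NIC using MSI-X shows one interrupt line per queue
    grep eth0 /proc/interrupts

    # show the qdisc currently attached to the device
    tc qdisc show dev eth0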