With oprofile, the performance hit meant we did not see the issue; at least, we could not pump the desired transaction rate. However, I did find a workaround for the 100% issue: I installed two 1G network cards and used bonding with balance-rr (round robin). Now I no longer see 100% utilization of the ksoftirqd thread.

BTW, I heard the -rt patch splits ksoftirqd into two threads, one for rx and one for tx. Have you guys tried the -rt patch?

On Tue, Jun 16, 2009 at 11:39 PM, Michael Blizek<michi1@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx> wrote:
> Hi!
>
> On 22:12 Tue 16 Jun , Tharindu Rukshan Bamunuarachchi wrote:
>> Hi all,
>>
>> Recently we were developing a high-performance / low-latency transaction
>> processing system on Linux (SuSE 11, but with a vanilla 2.6.29 kernel).
>>
>> We tried to send a high volume of traffic, typically 20K messages per
>> second, each message about 400 to 800 bytes.
>>
>> We saw the ksoftirqd thread taking about 19% CPU at a 15K rate, and at a
>> 20K rate it was about 100%, so we could not pump more than 20K messages.
>>
>> Do you have any idea about ksoftirqd behaviour, and why it is taking 100% CPU?
>>
>> We have tried both a 1G NIC and a 10G NIC and saw the high % on both.
>
> What CPU(s) do you have? This sounds like cache line thrashing or something
> like that. You can try reproducing the behaviour with oprofile running (on
> some archs you also have to enable compiling with frame pointers under
> "Kernel hacking"). Oprofile slows down the whole system, and turning it on
> might make the problem go "away". If the problem is reproducible with
> oprofile, you might see a function which takes very long under load and is
> fast without load.
>
> -Michi
> --
> programming a layer 3+4 network protocol for mesh networks
> see http://michaelblizek.twilightparadox.com
>
--
To unsubscribe from this list: send an email with
"unsubscribe kernelnewbies" to ecartis@xxxxxxxxxxxx
Please read the FAQ at http://kernelnewbies.org/FAQ
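
[Editor's note] For anyone wanting to try the balance-rr bonding workaround described above, here is a minimal sketch using the sysfs interface that 2.6-era kernels expose (see Documentation/networking/bonding.txt in the kernel tree). The device names bond0, eth0, eth1 and the address are assumptions for illustration; substitute your own NICs and addressing.

    # load the bonding driver and create a bond device (bond0 is an assumed name)
    modprobe bonding
    echo +bond0 > /sys/class/net/bonding_masters
    # mode and miimon must be set while the bond is down and has no slaves
    echo balance-rr > /sys/class/net/bond0/bonding/mode
    echo 100 > /sys/class/net/bond0/bonding/miimon
    # bring the bond up, then enslave the two 1G NICs (slaves must be down first)
    ip link set bond0 up
    ip link set eth0 down
    ip link set eth1 down
    echo +eth0 > /sys/class/net/bond0/bonding/slaves
    echo +eth1 > /sys/class/net/bond0/bonding/slaves
    # example address; use whatever your transaction hosts actually need
    ip addr add 192.168.1.10/24 dev bond0

With balance-rr, transmitted packets are striped across both slaves in turn, so the per-packet softirq work is no longer funnelled through a single NIC queue, which matches the reported drop in ksoftirqd utilization.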