On Wed, 15 Nov 2006 00:15:47 +0200 (IST) eli@xxxxxxxxxxxxxxxxxx wrote:

> Hi,
> I am running a client/server test app over IPoIB in which the client
> sends a certain amount of data to the server. When the transmission
> ends, the server prints the bandwidth and how much data it received.
> I can see that the server reports receiving only about 60% of what
> the client sent. However, when I look at the server's interface
> counters before and after the transmission, I see that it actually
> received all the data the client sent. This leads me to suspect that
> the networking layer dropped some of the data. One thing to note -
> the CPU is 100% busy at the receiver. Could this be the reason (the
> machine I am using has two dual-core processors, i.e. 4 CPUs)?

If the receiver application can't keep up, UDP drops packets and the
receive buffer errors counter (UDP_MIB_RCVBUFERRORS) is incremented.
Don't expect flow control or reliable delivery; it's a datagram
service! A sketch of how to watch that counter is appended at the end
of this message.

> The second question is: how do I make the interrupts get serviced by
> all CPUs? I tried through procfs as described in IRQ-affinity.txt.
> I can set the mask to 0f, and reading it back shows it is indeed 0f,
> but after a few seconds it reverts to 02 (which means CPU1 only).

Most likely the user-level IRQ balancing daemon (irqbalance) is
adjusting it; see the second sketch below.

> One more thing - the device I am using is capable of generating
> MSI-X interrupts.

Look at the device capabilities with: lspci -vv

-- 
Stephen Hemminger <shemminger@xxxxxxxx>
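
For reference, here is a minimal sketch of checking for those drops,
assuming a kernel recent enough to export the per-protocol counters
in /proc/net/snmp; the 8 MB buffer size is only an illustration, not
a recommendation:

    # Watch the UDP counters while the test runs; receive-buffer
    # drops show up in the RcvbufErrors column:
    grep Udp: /proc/net/snmp
    # netstat -su prints the same counters with readable labels.

    # If the counter climbs, raise the system-wide receive buffer
    # ceiling...
    sysctl -w net.core.rmem_max=8388608
    # ...and have the receiver ask for a larger buffer with
    # setsockopt(SOL_SOCKET, SO_RCVBUF, ...).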
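
And a sketch of pinning the interrupt once the balancer is out of the
way. The IRQ number 123 is a placeholder (take the real one from
/proc/interrupts), and the init script path varies by distribution:

    # Stop the user-level balancer so it stops rewriting the mask:
    /etc/init.d/irqbalance stop

    # 0f = CPUs 0-3; write the mask, then read it back to verify:
    echo 0f > /proc/irq/123/smp_affinity
    cat /proc/irq/123/smp_affinity

    # To confirm the device really advertises MSI-X, look for the
    # capability in the lspci output:
    lspci -vv | grep MSI-X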