On 11/20/05, Unai Uribarri <unaiur@xxxxxxxxx> wrote:
> I'm trying to run a gigabit Linux-based bridge at full duplex, full
> line rate. That means receiving and sending 3 million packets per
> second: quite ambitious, I know.

3.125 Mpps, actually, assuming all minimum-length 40-byte packets
(arithmetic in the P.S. below).

> My first problem is to get the appropriate hardware: I've evaluated
> several PCI-X NICs (Intel & Broadcom) but I can't achieve more than 800
> kpps. I've read that this is the PCI-X bus limit, so I'm going to buy a
> pair of PCI Express x4 NICs.

That would help.

> Has anyone evaluated the SysKonnect SK-9E22 or Intel PRO/1000 PT Dual
> Port cards? Are there other capable NICs?

My experience with the e1000s is that they are very good, but I have not
yet been able to get a full 1 Gbit transfer rate out of them for any
non-trivial length of time. That has much more to do with Linux than with
the card, but at the same time I can't say whether the card can sustain
gigabit speeds, since I can't test it. I have no experience with the
SysKonnect cards.

> My second problem is to evaluate the packet drop rate. I've written a
> user-space program using PF_PACKET's mmap'ed receive ring that just
> counts packets, but it can't even receive 800 kpps.

Again, in my experience, 800 kpps is about all you're going to get with
Linux and GigE (and that's without iptables). I would try something more
custom, or get yourself some 10GigE cards and try those. (A rough sketch
of the kind of mmap'ed-ring counting loop I mean is appended at the end
of this message.)

Really, at those speeds the system bus is going to hose you; it would
need to be replaced with a switched backplane of some kind to sustain
serious GigE+ performance over an extended period of time.

If anyone knows that I'm wrong, I'd be interested to hear it as well.

-- 
Toby DiPasquale
0x636f6465736c696e67657240676d61696c2e636f6d
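P.S. The arithmetic behind the figures above, for reference:

    1,000,000,000 bit/s / (40 bytes * 8 bit/byte) = 3.125 Mpps per direction

That ignores Ethernet framing; if you count minimum 64-byte frames plus
preamble and inter-frame gap (84 bytes on the wire), the per-direction
ceiling is closer to 1.49 Mpps.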
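P.P.S. Here is a rough, untested sketch of the kind of PF_PACKET
mmap'ed-ring counting loop I had in mind. It uses the classic tpacket RX
ring; the block/frame sizes and interface name are only placeholders, and
most error handling is omitted:

/* Rough sketch: count packets from a PF_PACKET mmap'ed RX ring.
 * Classic TPACKET_V1-style ring; sizes below are illustrative only.
 * Build with something like: gcc -O2 -o pktcount pktcount.c
 * Run as root, e.g.: ./pktcount eth0
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <poll.h>
#include <sys/socket.h>
#include <sys/mman.h>
#include <linux/if_ether.h>
#include <linux/if_packet.h>
#include <net/if.h>
#include <arpa/inet.h>

int main(int argc, char **argv)
{
    const char *ifname = argc > 1 ? argv[1] : "eth0";

    int fd = socket(PF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (fd < 0) { perror("socket"); return 1; }

    /* Bind to one interface so we only count its traffic. */
    struct sockaddr_ll ll;
    memset(&ll, 0, sizeof(ll));
    ll.sll_family   = AF_PACKET;
    ll.sll_protocol = htons(ETH_P_ALL);
    ll.sll_ifindex  = if_nametoindex(ifname);
    if (bind(fd, (struct sockaddr *)&ll, sizeof(ll)) < 0) {
        perror("bind"); return 1;
    }

    /* Set up the kernel<->user shared RX ring (sizes are guesses). */
    struct tpacket_req req;
    memset(&req, 0, sizeof(req));
    req.tp_block_size = 4096;
    req.tp_frame_size = 2048;           /* >= MTU plus header overhead */
    req.tp_block_nr   = 256;
    req.tp_frame_nr   = req.tp_block_nr *
                        (req.tp_block_size / req.tp_frame_size);
    if (setsockopt(fd, SOL_PACKET, PACKET_RX_RING,
                   &req, sizeof(req)) < 0) {
        perror("PACKET_RX_RING"); return 1;
    }

    size_t map_len = (size_t)req.tp_block_size * req.tp_block_nr;
    char *ring = mmap(NULL, map_len, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    if (ring == MAP_FAILED) { perror("mmap"); return 1; }

    unsigned long count = 0;
    unsigned int idx = 0;

    for (;;) {
        struct tpacket_hdr *hdr =
            (struct tpacket_hdr *)(ring + (size_t)idx * req.tp_frame_size);

        if (!(hdr->tp_status & TP_STATUS_USER)) {
            /* Slot still owned by the kernel: wait for traffic. */
            struct pollfd pfd = { .fd = fd, .events = POLLIN };
            poll(&pfd, 1, -1);
            continue;
        }

        count++;                             /* count only, no copy    */

        hdr->tp_status = TP_STATUS_KERNEL;   /* hand slot back         */
        idx = (idx + 1) % req.tp_frame_nr;

        if ((count & 0xFFFFF) == 0)          /* report every ~1M pkts  */
            printf("%lu packets\n", count);
    }
}

The point of the mmap'ed ring is that the counting loop never copies
packet data and never makes a per-packet system call; it only flips the
status word to hand each slot back to the kernel.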