Hi,

I have a requirement to handle TCP connections directly, bypassing the Linux stack. To do this, I have blocked all packets to the one port I'm using with iptables, and I send and receive on a PF_PACKET socket. This works well, but I am currently having a performance problem: despite the modest hardware, I had expected to achieve more than the 4000 packets per second I am currently seeing (40-byte packets). A similar program running on NT achieves 25kpps.

The hardware is an ordinary 700MHz Athlon system with a 3c905C Ethernet card. It is doing nothing else, and it sits on a switched network with a low level of broadcast traffic. I'm currently running a Red Hat 2.4.18-19.8.0 kernel.

For measuring the packet rate, the code looks broadly like this:

    if ((s = socket(PF_PACKET, SOCK_DGRAM, htons(ETH_P_IP))) < 0) {
        [...]
    }
    len = recvfrom(s, &buf, MAX_LEN, 0, (struct sockaddr *)&saddr_in, &saddr_size);
    [...]
    while (alrm) {
        id++;
        n = sendto(s, iph_out, 40 + data_len_out, 0, (struct sockaddr *)&saddr_in, saddr_size);
    }

(data_len_out is 0.)

Is 4000 packets a second reasonable on hardware like this? I found an old message in an archive which considered 20kpps for sendto() on a raw socket to be slow, so I had certainly been expecting more. Is SOCK_DGRAM significantly slower than SOCK_RAW for PF_PACKET? Would an upgrade to a 2.5 kernel help? Where is the bottleneck likely to be? I'd rather avoid writing a kernel module if possible, but if it's the only way to get the performance I need, I'll consider it.

So, if anyone has any ideas, they'd be much appreciated.

Many thanks,
Richard
-
: send the line "unsubscribe linux-net" in the body of
a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html