Hi all-

I've written a tiny program using raw sockets to perform a reduced set of the functions of tcpdump for our local network. We want it to handle collection of traffic at wire speed over a gigabit Ethernet link. To test this functionality, I extended the program to write Ethernet packets over a raw socket, which I figured would be able to transmit at high rates. Unfortunately, I'm only getting a few megabits per second! I was hoping some gurus out there could point me to more documentation on raw sockets under Linux, especially concerning their use at user level with kernel 2.2.18, or explain the behavior I'm seeing.

The weird behavior is twofold. First, the initial run of the program achieves an order of magnitude lower bandwidth than subsequent runs (some weird caching?). Second, the bandwidth the program achieves varies with the number of packets sent: performance increases up to about 15 Mbps for counts up to 45 packets, then suddenly drops to about 2 Mbps for most runs, with about 1 in 10 randomly getting the larger values. The machines are otherwise unloaded.

In more detail, my methodology is pretty simple:

    open a raw socket
    do ioctl to get interface index from name "eth0"
    bind the socket to the interface
    get timestamp (rdtsc)
    loop { send a maximum-sized packet }
    get timestamp
    calculate Mbps from the timestamps

*Any* insight would be greatly appreciated. I would hate to have to start sending UDP packets to get better performance ;(

Thanks,
-Eric

--------------------------------------------
 Eric H. Weigle          CCS-1, RADIANT team
 ehw@lanl.gov        Los Alamos National Lab
 (505) 665-4937    http://home.lanl.gov/ehw/
--------------------------------------------
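P.S. In case concrete code is useful, here is a cut-down sketch along the lines of the steps above (not my exact program: error checking is stripped, the frames are dummy broadcast frames, and CPU_MHZ is a placeholder for however the TSC is calibrated on your machine):

/* rawsend.c -- cut-down sketch of the test sender described above
 * (i386, kernel 2.2-era PF_PACKET).  Error checking stripped for
 * brevity; CPU_MHZ is a placeholder for a real TSC calibration.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/ioctl.h>
#include <net/if.h>
#include <netinet/in.h>
#include <linux/if_packet.h>
#include <linux/if_ether.h>

#define NPKTS    1000
#define FRAMELEN 1514            /* 14-byte header + 1500-byte payload */
#define CPU_MHZ  800.0           /* placeholder: calibrate for your CPU */

/* read the cycle counter (i386) */
static inline unsigned long long rdtsc(void)
{
    unsigned long long t;
    __asm__ __volatile__("rdtsc" : "=A" (t));
    return t;
}

int main(void)
{
    unsigned char frame[FRAMELEN];
    struct ifreq ifr;
    struct sockaddr_ll sll;
    unsigned long long t0, t1;
    double secs, mbps;
    int fd, i;

    /* open a raw socket */
    fd = socket(PF_PACKET, SOCK_RAW, htons(ETH_P_ALL));

    /* ioctl to get interface index from name "eth0" */
    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "eth0", IFNAMSIZ);
    ioctl(fd, SIOCGIFINDEX, &ifr);

    /* bind the socket to the interface */
    memset(&sll, 0, sizeof(sll));
    sll.sll_family   = AF_PACKET;
    sll.sll_protocol = htons(ETH_P_ALL);
    sll.sll_ifindex  = ifr.ifr_ifindex;
    bind(fd, (struct sockaddr *)&sll, sizeof(sll));

    /* dummy maximum-sized frame: broadcast dst, zero src/type/payload */
    memset(frame, 0, sizeof(frame));
    memset(frame, 0xff, 6);

    t0 = rdtsc();
    for (i = 0; i < NPKTS; i++)
        send(fd, frame, FRAMELEN, 0);
    t1 = rdtsc();

    secs = (double)(t1 - t0) / (CPU_MHZ * 1e6);
    mbps = (NPKTS * FRAMELEN * 8.0) / (secs * 1e6);
    printf("%d frames in %.4f s: %.2f Mbps\n", NPKTS, secs, mbps);

    close(fd);
    return 0;
}

The Mbps arithmetic at the end assumes the TSC ticks at CPU_MHZ million cycles per second; substitute your own calibration.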