Re: socket optimizations

At 12:48 PM 2/14/03 +0000, Bryan K. wrote:
> I'm writing a network application which needs to transfer large amounts of data over Ethernet. My question is: what is the optimal size of the data buffer passed to write(socket, ...)?
Experiment.

Look at your variables. What is the size of the socket buffers? How many sockets need to be open at any given time? Are you using select(2) or poll(2), or are you trying to use blocking I/O? Is the data in memory, or are you having to read from disk all the time? And if it is in "memory," is it possible that you will generate swapping events in the application?
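For example, here is a minimal sketch (illustrative only, not code from my tests) of how you might look at two of those variables on a connected descriptor fd: the kernel's send and receive buffer sizes, and whether the socket is in blocking mode.

#include <stdio.h>
#include <fcntl.h>
#include <sys/socket.h>

void show_socket_vars(int fd)
{
	int sndbuf = 0, rcvbuf = 0;
	int flags;
	socklen_t len = sizeof(sndbuf);

	/* Ask the kernel what it is actually using for this socket. */
	getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, &len);
	len = sizeof(rcvbuf);
	getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &len);

	/* O_NONBLOCK tells you whether read(2)/write(2) will block. */
	flags = fcntl(fd, F_GETFL, 0);

	printf("sndbuf %d, rcvbuf %d, %sblocking\n",
	       sndbuf, rcvbuf, (flags & O_NONBLOCK) ? "non-" : "");
}

Note that on Linux getsockopt(2) reports roughly double the value you set with setsockopt(2), because the kernel reserves room for its own bookkeeping.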

I once wrote a program, just for the hell of it, that generated data (using a stupid and fast data generator) and shipped it all over a 10BASE-T network to a second computer that just read the stuff and dumped the data on the floor. I fiddled with socket size and buffer size in multiple five-minute trials. I could saturate that 10BASE-T network pretty easily no matter what the settings, which indicated to me that the differences were not major. So I continued to fiddle until the system loading was as low as I could get it. As I recall, I ended up using 32,768-byte socket transmit and receive buffers, 4,096-byte buffers for read(2) and write(2), and select(2) to check for data available and for buffer space available. Larger socket buffers and transfer buffers just didn't seem to have any measurable effect.
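If it helps, here is a rough sketch of the shape of the sending side with the numbers I settled on. It is a reconstruction for illustration, not the original program; send_all() and the zero-filled payload are stand-ins for whatever actually feeds your application.

#include <string.h>
#include <unistd.h>
#include <sys/select.h>
#include <sys/socket.h>

#define SOCKBUF 32768	/* socket transmit buffer size I ended up with */
#define CHUNK	 4096	/* per-write(2) transfer size I ended up with */

int send_all(int fd, size_t total)
{
	char chunk[CHUNK];
	size_t sent = 0, want;
	ssize_t n;
	int bufsz = SOCKBUF;
	fd_set wfds;

	/* Ask for a 32,768-byte transmit buffer; the kernel may adjust it. */
	setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &bufsz, sizeof(bufsz));

	memset(chunk, 0, sizeof(chunk));	/* stand-in payload */

	while (sent < total) {
		FD_ZERO(&wfds);
		FD_SET(fd, &wfds);

		/* Wait until the socket has buffer space for more data. */
		if (select(fd + 1, NULL, &wfds, NULL, NULL) < 0)
			return -1;

		want = total - sent < CHUNK ? total - sent : CHUNK;
		n = write(fd, chunk, want);
		if (n < 0)
			return -1;
		sent += (size_t)n;
	}
	return 0;
}

The receiver was the mirror image: select(2) on readability, then read(2) into a 4,096-byte buffer and throw the result away.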

Smaller socket buffers did affect throughput a little, but not much. Larger socket buffers bought me nothing.

My experiments were done on a 2.0.34 kernel, using a Slackware distribution. The computers were a Pentium 166 and a Pentium II 233. I don't recall the exact model numbers of the Ethernet cards, but one was an SMC 16-bit ISA card and the other was a 3Com PCI one. I never repeated the experiment with 100BASE-T equipment.

I tried introducing fixed delays in the program to decrease loading, but the extra work proved not to be beneficial. It was interesting, though, to use nice(2) and see that throughput was not affected, but response time for other parts of the system improved markedly.
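If you want to try the nice part yourself, it is one call at startup. The increment of 10 below is just an example value, not what I used:

#include <errno.h>
#include <stdio.h>
#include <unistd.h>

/* Lower our own scheduling priority; a bigger nice value means we get
 * the CPU less aggressively, which is what helped system response time. */
int lower_priority(void)
{
	errno = 0;
	if (nice(10) == -1 && errno != 0) {	/* -1 can be a legitimate return */
		perror("nice");
		return -1;
	}
	return 0;
}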

After writing the book _Linux IP Stacks Commentary_, I now believe I understand why I got the results I did, but I'll leave the joy of those discoveries to you.

Remember, I've not re-run these experiments on modern kernels, nor have I studied the kernels closely. I recommend you do your own experiments (get computers out of the trash can if you don't have enough to do these sorts of things) and learn what you can. Especially run experiments that closely mimic what you are trying to do in "real life."

Satch

-
To unsubscribe from this list: send the line "unsubscribe linux-net" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
