Hi there,

I'm doing experiments with (modified*) software iSCSI over a link with an emulated round-trip time (RTT) of 100 ms, set up with netem. When I set the send buffer size to 128 KB, I get a throughput of up to 43 Mbps, which seems impossible since (buffer size) / RTT is only about 10 Mbps. When I set the send buffer size to 512 KB, I get up to 60 Mbps, which also seems impossible since (buffer size) / RTT is only about 40 Mbps.

I understand that when I ask for a 128 KB buffer I actually get 256 KB, because the kernel doubles the requested size, and that half of the doubled buffer is used for bookkeeping/metadata rather than for the data to be transferred. So the effective buffer sizes in the two examples should still be just 128 KB and 512 KB respectively.

This is what confuses me: theoretically, send buffers of 128 KB and 512 KB should achieve no more than 10 Mbps and 40 Mbps respectively, yet I measured considerably more than those limits. Is there any chance the send buffer can be "overused", or is some other mechanism inside TCP doing some optimization?

* The modification disables TCP_NODELAY, enables "use_clustering" for SCSI, and sets different send buffer sizes on the TCP socket (a minimal sketch of the socket setup and the window/RTT arithmetic is at the end of this mail).

Any ideas would be highly appreciated. Thanks a lot!

Jack
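
P.S. For reference, here is a minimal userspace sketch of what I mean by "setting the send buffer" and by the (buffer size) / RTT bound. It is not the actual modified initiator code, just an illustration; the doubling behaviour is the one documented in socket(7), and the numbers at the bottom are where my 10 Mbps and 40 Mbps figures come from.

/* Minimal sketch (not the real iSCSI initiator code): request a send
 * buffer, read back the kernel-doubled value, and compute the naive
 * one-buffer-per-RTT throughput bound. Error checking omitted. */
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    int sndbuf = 128 * 1024;          /* requested send buffer: 128 KB */
    socklen_t len = sizeof(sndbuf);
    double rtt = 0.100;               /* emulated RTT from netem: 100 ms */

    setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf));
    getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, &len);
    /* The kernel doubles the requested value to leave room for its own
     * bookkeeping, so this should print ~256 KB for a 128 KB request. */
    printf("effective SO_SNDBUF: %d bytes\n", sndbuf);

    /* Naive upper bound: one buffer's worth of data per round trip. */
    double mbps = (128 * 1024 * 8) / rtt / 1e6;   /* ~10.5 Mbit/s */
    printf("window/RTT bound for 128 KB: %.1f Mbit/s\n", mbps);

    /* "Disabling TCP_NODELAY" in my setup means setting it to 0,
     * i.e. leaving Nagle's algorithm enabled. */
    int nodelay = 0;
    setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &nodelay, sizeof(nodelay));

    close(fd);
    return 0;
}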