From: John Heffner <jheffner@xxxxxxx>
Date: Thu, 1 Sep 2005 22:51:48 -0400

> I have an idea why this is going on.  Packets are pre-allocated by the
> driver to be a max packet size, so when you send small packets, it
> wastes a lot of memory.  Currently Linux uses the packets at the
> beginning of a connection to make a guess at how best to advertise its
> window so as not to overflow the socket's memory bounds.  Since you
> start out with big segments then go to small ones, this is defeating
> that mechanism.  It's actually documented in the comments in
> tcp_input.c. :)
>
>  * The scheme does not work when sender sends good segments opening
>  * window and then starts to feed us spagetti. But it should work
>  * in common situations. Otherwise, we have to rely on queue collapsing.

That's a strong possibility; good catch, John.  I'm still not ruling
out some box in the middle, although I consider that less likely than
your theory.

So you're suggesting that tcp_prune_queue() should do the

	if (atomic_read(&sk->sk_rmem_alloc) >= sk->sk_rcvbuf)
		tcp_clamp_window(sk, tp);

check after attempting to collapse the queue?  That window clamping
should fix the problem, since we recalculate the window to advertise.

-
To unsubscribe from this list: send the line "unsubscribe linux-net" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html