On Thursday 12 April 2001 23:02, Andi Kleen wrote:
> > While reading the source code for the TCP implementation in Linux
> > (2.2.18) I noticed that delayed ACKs in Linux are implemented
> > differently than in BSD. In BSD, a timer that goes off every 200 ms
> > is used for sending those ACKs, which makes the delay a trifle hard
> > to predict. Linux seems to set one timer per connection and ACK, and
> > if the ACK can be piggybacked on a data packet, the timer is
> > canceled. The timer interval seems to depend on the round-trip time
> > of the connection. The algorithm also seems to use delayed ACKs only
> > when the sender has gone out of slow start. Am I right?
>
> Out of the beginning of slow start. It's also cancelled when two
> rcvmss sized packets arrive. Yes.

OK, so the actual time the ACKs are delayed is either a constant (HZ/2?)
or 0. I was under the impression that the delay was dynamic.

> > I would be interested to know if anyone had more information on the
> > delayed ACK timeout calculation, the slow start optimization, and
> > the rationale for implementing it this way instead of the BSD way.
>
> Main goal was to make slow lines like PPP over 28.000 baud behave
> better, where 200 ms leads to some suboptimal behaviour. Also it was
> easy enough with the Linux TCP framework. (I believe newer BSDs are
> also moving to similar algorithms using fine-grained timers.)

The coarse-grained BSD timers are a legacy from a time when computers
were slow, so I guess the Linux way is better for modern PCs.

/adam
-- 
Adam Dunkels <adam@sics.se>
http://www.sics.se/~adam
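
To make the difference concrete, here is a rough user-space sketch of
the two delayed-ACK strategies discussed above. This is not the 2.2
kernel code (that lives in net/ipv4/tcp*.c); all names (conn_t,
DELACK_TIMEOUT_MS, send_ack(), ...) and the 200 ms bound are invented
for illustration. BSD style: one global timer fires every 200 ms and
flushes any pending ACK, so the delay lands anywhere in [0, 200) ms.
Linux style: each connection arms its own timer when an ACK becomes
pending; the timer is cancelled if the ACK can be piggybacked on
outgoing data, and the ACK is sent immediately at the beginning of slow
start or once two rcvmss-sized segments have arrived.

    /* Illustrative sketch only, compiled in user space. */
    #include <stdio.h>
    #include <stdbool.h>

    #define DELACK_TIMEOUT_MS 200      /* illustrative upper bound only */

    typedef struct {
        bool ack_pending;              /* we owe the peer an ACK           */
        bool delack_timer_armed;       /* per-connection timer is running  */
        int  unacked_segments;         /* full-sized segments since last ACK */
        bool in_quickack;              /* beginning of slow start: ACK now */
    } conn_t;

    static void send_ack(conn_t *c, const char *why)
    {
        printf("ACK sent (%s)\n", why);
        c->ack_pending = false;
        c->delack_timer_armed = false;
        c->unacked_segments = 0;
    }

    /* Called when a data segment arrives from the peer. */
    static void on_receive(conn_t *c)
    {
        c->ack_pending = true;
        c->unacked_segments++;

        if (c->in_quickack || c->unacked_segments >= 2) {
            /* Immediate ACK: start of slow start, or two rcvmss-sized
             * segments have arrived since the last ACK. */
            send_ack(c, "immediate");
        } else if (!c->delack_timer_armed) {
            /* Arm the per-connection delayed-ACK timer. */
            c->delack_timer_armed = true;
            printf("delayed-ACK timer armed (<= %d ms)\n", DELACK_TIMEOUT_MS);
        }
    }

    /* Called when we transmit data of our own: piggyback any pending ACK
     * and thereby cancel the timer. */
    static void on_transmit(conn_t *c)
    {
        if (c->ack_pending)
            send_ack(c, "piggybacked, timer cancelled");
    }

    /* Called only if the per-connection timer actually expires. */
    static void on_delack_timeout(conn_t *c)
    {
        if (c->ack_pending)
            send_ack(c, "timer expired");
    }

    int main(void)
    {
        conn_t c = { .in_quickack = false };

        on_receive(&c);          /* arms the timer                 */
        on_transmit(&c);         /* ACK rides on our own data      */

        on_receive(&c);          /* arms the timer again           */
        on_delack_timeout(&c);   /* nothing to send: bare ACK      */

        on_receive(&c);
        on_receive(&c);          /* second full segment -> immediate ACK */
        return 0;
    }

The piggyback-and-cancel path is what helps the slow PPP case Andi
mentions: on a thin link an outgoing data packet usually shows up well
before any fixed 200 ms tick, so the ACK costs no extra segment and no
extra waiting.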