So, I see the PATCH discussion has reached a compromise, but I would like
the protocol people to reflect on what TFRC really says about the
nofeedback timer values:
I can see two things I'd like to understand:
1) What should be the INITIAL value for the timer? RFC 3448 says 2
seconds (section 4.2), but why was this 2 seconds, rather than 3?
- One rationale is that Linux already uses 2 seconds for the initial RTO,
but RFC 1122 Section 4.2.3.1 (and RFC 2988) says it should be 3
seconds. 3 seconds provides more headroom for paths that may have
variable properties, especially at start-up (a "classic" example is ISDN).
- QUESTION: Should TFRC agree with TCP or with the Linux value?
2) Should TFRC define a MINIMUM value for the timer?
- Some arguments for a small timer value include faster congestion
responses to loss and lower cost (if processing can be coincident with
other protocol activity - but Mark suggested we only need to check in
the send code anyway?).
- Some arguments for a larger timer include more tolerance to sudden
changes in path characteristics (TCP uses a min RTO of 1 sec, RFC 2988),
e.g. mobility or routing changes, and a lower processing load,
especially at higher bit rates.
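For concreteness, here is a minimal sketch of the expiry interval from RFC 3448, section 4.3, with an optional floor added to illustrate the MINIMUM value under discussion. The `t_min` parameter is hypothetical - it is not part of RFC 3448 - and the traffic figures are just example numbers:

```python
# Sketch of the TFRC nofeedback timer interval (RFC 3448, section 4.3):
# the timer is set to max(4*RTT, 2*s/X), where s/X is the current
# inter-packet send interval. The `t_min` floor is a HYPOTHETICAL
# illustration of a possible minimum value; it is NOT in RFC 3448.

def nofeedback_interval(rtt, s, x, t_min=0.0):
    """rtt: round-trip time (s); s: packet size (bytes);
    x: allowed sending rate (bytes/s); t_min: hypothetical floor (s)."""
    send_interval = s / x          # time between packets at rate x
    return max(4 * rtt, 2 * send_interval, t_min)

# Example: a LAN with RTT = 0.2 ms, 1500-byte packets at 100 Mbit/s.
# The timer expires after max(0.8 ms, 0.24 ms) = 0.8 ms -- very often.
lan = nofeedback_interval(rtt=0.0002, s=1500, x=100e6 / 8)

# With a 1-second floor (cf. TCP's min RTO in RFC 2988) the timer would
# fire at most once per second on the same path.
lan_floored = nofeedback_interval(rtt=0.0002, s=1500, x=100e6 / 8, t_min=1.0)
```

This makes the trade-off visible: a floor caps the expiry frequency (and processing load) on small-RTT paths, at the price of a slower congestion response.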
Any thoughts on these issues?
Gorry
P.S. the starting quote was:
"The TFRC nofeedback timer normally expires after the maximum of 4
RTTs and twice the current send interval (RFC 3448, 4.3). On LANs
with a small RTT this can mean a high processing load and reduced
performance, since then the nofeedback timer is triggered very
frequently. As a result, the sending rate quickly converges towards
zero."
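To illustrate the quoted convergence: on each nofeedback-timer expiry, RFC 3448 (section 4.4) roughly halves the allowed sending rate, bounded below by one packet per t_mbi = 64 seconds. A simplified sketch, omitting details such as the X_recv handling (the starting rate and packet size are example values):

```python
# Simplified sketch of the rate back-off on nofeedback-timer expiry
# (RFC 3448, section 4.4): each expiry roughly halves the allowed rate X,
# floored at one packet per t_mbi seconds. X_recv handling is omitted.

T_MBI = 64.0  # seconds; maximum back-off interval from RFC 3448

def on_nofeedback_expiry(x, s):
    """Halve the allowed rate x (bytes/s), floored at s / T_MBI."""
    return max(x / 2.0, s / T_MBI)

# Starting at 12.5 MB/s with 1500-byte packets, ten successive expiries:
x, s = 12.5e6, 1500
for _ in range(10):
    x = on_nofeedback_expiry(x, s)
# x has now dropped roughly 1000-fold; if the timer fires every few
# milliseconds on a small-RTT LAN, the rate collapses almost immediately.
```

This is why the expiry frequency matters so much on LANs: each expiry compounds the back-off, so a timer that fires every millisecond drives the rate toward the floor within a fraction of a second.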