Re: [Last-Call] [EXTERNAL] Re: [tcpm] Last Call: <draft-ietf-tcpm-rack-13.txt> (The RACK-TLP loss detection algorithm for TCP) to Proposed Standard

Hi Neal,

On Thu, 17 Dec 2020, Neal Cardwell wrote:

On Thu, Dec 17, 2020 at 2:36 PM Yuchung Cheng <ycheng@xxxxxxxxxx> wrote:
      How about
 
"9.3.  Interaction with congestion control

RACK-TLP intentionally decouples loss detection ... 
As mentioned in the Figure 1 caption, RFC5681 mandates the principle that
loss in two successive windows of data, or the loss of a retransmission,
should be taken as two indications of congestion and therefore reacted to
separately. However, an implementation of the RFC6675 pipe algorithm may not
directly account for these newly detected congestion events properly. PRR
[RFCxxxx] is RECOMMENDED for the specific congestion control actions taken
upon the losses detected by RACK-TLP"

To Markku's request for "what's the justification to enter fast recovery", consider this
example w/o RACK-TLP:

T0: Send 100 segments, application-limited. All are lost.
T0+2RTT: the application writes more data, so another 3 segments are sent. They make it to the
destination and trigger 3 DUPACKs.
T0+3RTT: the 3 DUPACKs arrive; fast recovery starts and the subsequent cc reaction bursts ~50
packets with Reno

In this case any ACK that arrives before the RTO is (generally) treated as part of the ACK clock,
which is how I understand Van's initial design. This behavior existed for decades before RACK-TLP.
RACK-TLP essentially changes the "3 segments sent by the app" to "1 segment sent by TCP".
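
For concreteness, a rough Python sketch of the arithmetic in this example (hypothetical
variable names, counting in whole segments; not text from the draft):

    MSS = 1                    # work in whole segments for simplicity

    flight_size = 100          # segments outstanding at T0, all lost
    # The 3 later application-written segments come back as the 3 DUPACKs.

    # Reno entering fast recovery (RFC 5681): ssthresh = FlightSize / 2
    ssthresh = max(flight_size // 2, 2 * MSS)
    cwnd = ssthresh            # cwnd used during the recovery episode

    # Once the SACK scoreboard marks the whole original flight lost,
    # pipe is ~0, so a sender without PRR/pacing may send cwnd - pipe at once.
    pipe = 0
    print("potential burst:", cwnd - pipe, "segments")   # ~50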


To amplify Yuchung's nice example, and try to restate it in more general terms:

As far as I'm aware, TLP probes do not introduce materially new congestion control behaviors,
beyond what can happen with [RFC5681] and [RFC6675].

Please see my previous reply. There are no new congestion control behaviors, if I understand what you mean by that, but there are more scenarios in which the loss of a full window is detected without an RTO.

This is because a TLP probe serves much the same probing function that an application write()
could have served, if the application had been so lucky as to issue its write() at the
appropriate time (i.e., delaying the write() of the last segment in the flight until 2*SRTT
after the preceding segment).

Thus, for any scenario that one constructs where a TLP probe initiates a fast recovery
episode, it is possible to construct, for a TCP implementation implementing just [RFC5681]
and [RFC6675], an application write() pattern where the on-the-wire behavior is nearly
identical to the TLP-initiated recovery.
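
As a reference point, a small Python sketch of the PTO arithmetic as I read it in the draft
(the 1-second fallback, the delayed-ACK allowance, and the clamping against RTO are from
memory, so treat those details as assumptions):

    def tlp_pto(srtt, rto, flight_size, wcdelackt=0.2):
        """Rough sketch of the Probe Timeout; not normative text."""
        pto = 2 * srtt if srtt is not None else 1.0   # fallback when no SRTT sample exists
        if flight_size == 1:
            pto += wcdelackt       # allow for a delayed ACK on a one-segment flight
        return min(pto, rto)       # never schedule the probe later than the RTO

    # The delayed-write analogy above: an application that happens to write() its last
    # segment 2*SRTT after the rest of the flight produces essentially the same
    # on-the-wire probe as a TLP fired at pto = 2*SRTT.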

Except when the sender is congestion-window limited. Being receive-window limited is another case where the application cannot write new data that would get transmitted, but a TLP probe could still send the highest-sequence segment?

The draft is a bit unclear on this, but I believe this was the intent (the draft says: "If such a segment is not available, ...", but in the receive-window-limited case such a new segment would be available, it just cannot be transmitted).
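
Expressed as a sketch, this is how I read that passage (the predicate names are made up, and
whether "available" should also mean "transmittable" is exactly the ambiguity above):

    def choose_tlp_probe(has_unsent_data, rwnd_allows_new_data,
                         next_new_segment, highest_sent_segment):
        """Sketch of TLP probe selection as I read the draft; not normative text."""
        if has_unsent_data and rwnd_allows_new_data:
            return next_new_segment        # send previously unsent data
        # Receive-window-limited case: new data exists but cannot be transmitted,
        # so the probe falls back to the highest-sequence segment already sent.
        return highest_sent_segment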

For folks concerned about a scenario with FlightSize of 100 segments, and a sender entering
fast recovery and blasting 50 segments, the same behavior could happen with the existing RFCs
[RFC5681] and [RFC6675], which allow that. And implementers who are worried about such bursts
(a very reasonable thing to be worried about) should probably be implementing pacing and PRR,
or something like that. But that was already true before RACK-TLP.
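
To make the "blast 50 segments" arithmetic explicit, a rough Python sketch of the RFC6675 pipe
estimate at segment granularity (hypothetical helper name; the real algorithm works in octets):

    def pipe_estimate(scoreboard):
        """Count outstanding segments in the spirit of the RFC6675 SetPipe() walk."""
        pipe = 0
        for seg in scoreboard:   # each seg: {'sacked': bool, 'lost': bool, 'rexmitted': bool}
            if seg['sacked']:
                continue
            if not seg['lost']:
                pipe += 1        # still presumed in flight
            if seg['rexmitted']:
                pipe += 1        # its retransmission is in flight
        return pipe

    # 100-segment flight, all marked lost, nothing SACKed or retransmitted yet:
    scoreboard = [{'sacked': False, 'lost': True, 'rexmitted': False} for _ in range(100)]
    print(pipe_estimate(scoreboard))   # 0 -- so with cwnd = 50, the existing rules
                                       # permit sending all 50 segments back-to-back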

I would rather say it is a bug in RFC6675. Nobody paid sufficient attention to such a scenario.

Yes, PRR handles it fine, in practice directing the sender to slow start and restarting the ACK clock by limiting the number of segments sent per incoming ACK. Pure pacing over one RTT would not be enough in my opinion, because it still sends too many segments per RTT.
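
For readers who want the mechanism spelled out, this is roughly the per-ACK computation PRR does
during recovery (transcribed from memory of the RFC6937 pseudocode, SSRB variant, so treat the
details as approximate): it caps what may be sent on each incoming ACK at about what that ACK
newly delivered, which is what restarts the ACK clock instead of permitting a burst.

    def prr_sndcnt(delivered_data, prr_delivered, prr_out, pipe,
                   ssthresh, recover_fs, mss):
        """Per-ACK send quota during recovery, roughly per RFC6937 (SSRB variant)."""
        prr_delivered += delivered_data
        if pipe > ssthresh:
            # Proportional part: spread the cwnd reduction over the recovery episode.
            sndcnt = -(-prr_delivered * ssthresh // recover_fs) - prr_out   # CEIL() - prr_out
        else:
            # Slow-start reduction bound: catch back up toward ssthresh, but no
            # faster than roughly the newly delivered data plus one MSS per ACK.
            limit = max(prr_delivered - prr_out, delivered_data) + mss
            sndcnt = min(ssthresh - pipe, limit)
        return max(sndcnt, 0), prr_delivered   # caller also sets cwnd = pipe + sndcnt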

Best regards,

/Markku

best,
neal


