[Last-Call] Artart last call review of draft-ietf-tsvwg-ecn-l4s-id-27

Reviewer: Bernard Aboba
Review result: On the Right Track

Here are my review comments.  I believe this is quite an important document, so
making its reasoning as clear as possible matters.  Unfortunately, the writing
and overall organization make the document hard to follow. If the authors are
open to it, I'd be willing to invest more time to help get it into shape.

Overall Comments

Abstract

Since this is an Experimental document, I was expecting the Abstract and
perhaps the Introduction to refer briefly to the considerations covered in
Section 7 (such as potential experiments and open issues).

Organization and inter-relation between Sections

The document has organizational issues which make it more difficult to read.

I think that Section 1 should provide an overview of the specification, helping
the reader navigate it.

Section 1.1 refers to definitions in Section 1.2, so I'd suggest that
Section 1.2 come first.

Section 1.3 provides basic information on Scope and the relationship of this
document to other documents.  I was therefore expecting Section 7 to include
questions on some of the related documents (e.g. how L4S might be tested along
with RTP).

I wonder whether much of Section 2 could be combined with Appendix B, with the
remainder moved into the Introduction, which might also refer to Appendix B.

Section 4.2

   RTP over UDP:  A prerequisite for scalable congestion control is for
      both (all) ends of one media-level hop to signal ECN
      support [RFC6679] and use the new generic RTCP feedback format of
      [RFC8888].  The presence of ECT(1) implies that both (all) ends of
      that media-level hop support ECN.  However, the converse does not
      apply.  So each end of a media-level hop can independently choose
      not to use a scalable congestion control, even if both ends
      support ECN.

[BA] The document earlier refers to an L4S modified version of SCreAM, but does
not provide a reference.  Since RFC 8888 is not deployed today, this paragraph
(and Section 7) leaves me somewhat unclear on the plan to evaluate L4S impact
on RTP. Or is the focus on experimentation with RTP over QUIC (e.g.
draft-ietf-avtcore-rtp-over-quic)?

   For instance, for DCTCP [RFC8257], TCP Prague
   [I-D.briscoe-iccrg-prague-congestion-control], [PragueLinux] and the
   L4S variant of SCReAM [RFC8298], the average recovery time is always
   half a round trip (or half a reference round trip), whatever the flow
   rate.

[BA] I'm not sure that an L4S variant of SCReAM could really be considered
"scalable" where simulcast or scalable video coding was being sent. In these
scenarios, adding a layer causes a multiplicative increase in bandwidth, so
that "probing" (e.g. stuffing the channel with RTX probes or FEC) is often a
necessary precursor to make it possible to determine whether adding layers is
actually feasible.
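[BA] For readers less familiar with the scaling argument the quoted paragraph
relies on, here is a back-of-the-envelope sketch (my own simplified model, not
taken from the draft): a Reno-like flow halves its window on a loss and regains
it at roughly one packet per RTT, so recovery takes about W/2 round trips and
grows with the flow rate, whereas a DCTCP-style scalable control sees marks
every round trip and makes small per-mark reductions, so its average recovery
time stays near half a round trip at any rate.

```python
# Illustrative recovery-time scaling comparison (simplified model;
# function names and constants are mine, not from the draft).

def reno_recovery_rtts(rate_pps: float, rtt_s: float) -> float:
    """Reno halves its window on loss and regains ~1 packet/RTT,
    so recovery takes about W/2 round trips, where W = rate * RTT."""
    window_pkts = rate_pps * rtt_s
    return window_pkts / 2.0

def scalable_recovery_rtts() -> float:
    """A scalable (DCTCP-like) control is marked every RTT and makes
    small per-mark reductions, so average recovery time is ~0.5 RTT
    whatever the flow rate."""
    return 0.5

rtt = 0.02  # 20 ms round trip
for rate_mbps in (10, 100, 1000):
    rate_pps = rate_mbps * 1e6 / (1500 * 8)  # 1500-byte packets
    print(rate_mbps, "Mb/s:", round(reno_recovery_rtts(rate_pps, rtt), 1),
          "RTTs (Reno) vs", scalable_recovery_rtts(), "RTT (scalable)")
```

The point of the sketch is only that the Reno column grows linearly with rate
while the scalable column is constant, which is the "scalable" property the
draft is defining.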

   As with all transport behaviours, a detailed specification (probably
   an experimental RFC) is expected for each congestion control,
   following the guidelines for specifying new congestion control
   algorithms in [RFC5033].  In addition it is expected to document
   these L4S-specific matters, specifically the timescale over which the
   proportionality is averaged, and control of burstiness.  The recovery
   time requirement above is worded as a 'SHOULD' rather than a 'MUST'
   to allow reasonable flexibility for such implementations.

[BA] Is the L4S variant of SCReAM one of the detailed specifications that will
be needed? From the text I wasn't sure whether this is documented
work-in-progress or a future work item.

Section 4.3.1

      To summarize, the coexistence problem is confined to cases of
      imperfect flow isolation in an FQ, or in potential cases where a
      Classic ECN AQM has been deployed in a shared queue (see the L4S
      operational guidance [I-D.ietf-tsvwg-l4sops] for further details
      including recent surveys attempting to quantify prevalence).
      Further, if one of these cases does occur, the coexistence problem
      does not arise unless sources of Classic and L4S flows are
      simultaneously sharing the same bottleneck queue (e.g. different
      applications in the same household) and flows of each type have to
      be large enough to coincide for long enough for any throughput
      imbalance to have developed.

[BA] This seems to me to be one of the key questions that could limit the
"incremental deployment benefit".  A reference to the discussion in Section 7
might be appropriate here.

5.4.1.1.1.  'Safe' Unresponsive Traffic

   The above section requires unresponsive traffic to be 'safe' to mix
   with L4S traffic.  Ideally this means that the sender never sends any
   sequence of packets at a rate that exceeds the available capacity of
   the bottleneck link.  However, typically an unresponsive transport
   does not even know the bottleneck capacity of the path, let alone its
   available capacity.  Nonetheless, an application can be considered
   safe enough if it paces packets out (not necessarily completely
   regularly) such that its maximum instantaneous rate from packet to
   packet stays well below a typical broadband access rate.

[BA] The problem with video traffic is that the encoder typically
targets an "average bitrate", resulting in keyframes whose
instantaneous bitrate is above the bottleneck bandwidth and delta
frames that are below it.  Since the "average rate" may not be
resettable before sending another keyframe, video has limited
ability to respond to congestion other than perhaps by dropping
simulcast and SVC layers. Does this mean that video is
"Unsafe Unresponsive Traffic"?

NITs

Abstract

   The L4S identifier defined in this document distinguishes L4S from
   'Classic' (e.g. TCP-Reno-friendly) traffic.  It gives an incremental
   migration path so that suitably modified network bottlenecks can
   distinguish and isolate existing traffic that still follows the
   Classic behaviour, to prevent it degrading the low queuing delay and
   low loss of L4S traffic.  This specification defines the rules that

[BA] Might be clearer to say "This allows suitably modified network..."

The words "incremental migration path" suggest that deployment of
L4S-capable network devices and endpoints provides incremental benefit.
That is, once new network devices are put in place (e.g. by replacing
a last-mile router), devices that are upgraded to support L4S will
see benefits, even if other legacy devices are not upgraded.

If this is the point you are looking to make, you might want to clarify
the language.

   L4S transports and network elements need to follow with the intention
   that L4S flows neither harm each other's performance nor that of
   Classic traffic.  Examples of new active queue management (AQM)
   marking algorithms and examples of new transports (whether TCP-like
   or real-time) are specified separately.

[BA] Don't understand "need to follow with the intention". Is this
stating a design principle, or does it represent deployment
guidance?

The sentence "L4S flows neither harm each other's performance nor that
of Classic traffic" might be better placed after the first sentence
in the second paragraph, since it relates in part to the "incremental
deployment benefit" argument.

Section 1. Introduction

   This specification defines the protocol to be used for a new network
   service called low latency, low loss and scalable throughput (L4S).
   L4S uses an Explicit Congestion Notification (ECN) scheme at the IP
   layer with the same set of codepoint transitions as the original (or
   'Classic') Explicit Congestion Notification (ECN [RFC3168]).
   RFC 3168 required an ECN mark to be equivalent to a drop, both when
   applied in the network and when responded to by a transport.  Unlike
   Classic ECN marking, the network applies L4S marking more immediately
   and more aggressively than drop, and the transport response to each

   [BA] Not sure what "aggressively" means here. In general, marking
   traffic seems like a less aggressive action than dropping it. Do
   you mean "more frequently"?

   Also, it's a bit of a run-on sentence, so I'd break it up:

   "than drop.  The transport response to each"

   mark is reduced and smoothed relative to that for drop.  The two
   changes counterbalance each other so that the throughput of an L4S
   flow will be roughly the same as a comparable non-L4S flow under the
   same conditions.  Nonetheless, the much more frequent ECN control
   signals and the finer responses to these signals result in very low
   queuing delay without compromising link utilization, and this low
   delay can be maintained during high load.  For instance, queuing
   delay under heavy and highly varying load with the example DCTCP/
   DualQ solution cited below on a DSL or Ethernet link is sub-
   millisecond on average and roughly 1 to 2 milliseconds at the 99th
   percentile without losing link utilization [DualPI2Linux], [DCttH19].

   [BA] I'd delete "cited below" since you provide the citation at
   the end of the sentence.

   Note that the inherent queuing delay while waiting to acquire a
   discontinuous medium such as WiFi has to be minimized in its own
   right, so it would be additional to the above (see section 6.3 of the
   L4S architecture [I-D.ietf-tsvwg-l4s-arch]).

   [BA] Not sure what "discontinuous medium" means. Do you mean
   wireless?  Also "WiFi" is a colloquialism; the actual standard
   is IEEE 802.11 (WiFi Alliance is an industry organization).
   Might reword this as follows:

   "Note that the changes proposed here do not lessen delays from
    accessing the medium (such as is experienced in [IEEE-802.11]).
    For discussion, see Section 6.3 of the L4S architecture
    [I-D.ietf-tsvwg-l4s-arch]."

   L4S is not only for elastic (TCP-like) traffic - there are scalable
   congestion controls for real-time media, such as the L4S variant of
   the SCReAM [RFC8298] real-time media congestion avoidance technique
   (RMCAT).  The factor that distinguishes L4S from Classic traffic is

   [BA] Is there a document that defines the L4S variant of SCReAM?

   its behaviour in response to congestion.  The transport wire
   protocol, e.g. TCP, QUIC, SCTP, DCCP, RTP/RTCP, is orthogonal (and
   therefore not suitable for distinguishing L4S from Classic packets).

   The L4S identifier defined in this document is the key piece that
   distinguishes L4S from 'Classic' (e.g. Reno-friendly) traffic.  It
   gives an incremental migration path so that suitably modified network
   bottlenecks can distinguish and isolate existing Classic traffic from
   L4S traffic to prevent the former from degrading the very low delay
   and loss of the new scalable transports, without harming Classic
   performance at these bottlenecks.  Initial implementation of the
   separate parts of the system has been motivated by the performance
   benefits.

[BA] I think you are making an "incremental benefit" argument here,
but it might be made more explicit:

"  The L4S identifier defined in this document distinguishes L4S from
   'Classic' (e.g. Reno-friendly) traffic. This allows suitably
   modified network bottlenecks to distinguish and isolate existing
   Classic traffic from L4S traffic, preventing the former from
   degrading the very low delay and loss of the new scalable
   transports, without harming Classic performance. As a result,
   deployment of L4S in network bottlenecks provides incremental
   benefits to endpoints whose transports support L4S."

Section 1.1.  Latency, Loss and Scaling Problems

   Latency is becoming the critical performance factor for many (most?)
   applications on the public Internet, e.g. interactive Web, Web
   services, voice, conversational video, interactive video, interactive
   remote presence, instant messaging, online gaming, remote desktop,
   cloud-based applications, and video-assisted remote control of
   machinery and industrial processes.  In the 'developed' world,
   further increases in access network bit-rate offer diminishing
   returns, whereas latency is still a multi-faceted problem.  In the
   last decade or so, much has been done to reduce propagation time by
   placing caches or servers closer to users.  However, queuing remains
   a major intermittent component of latency.

[BA] Since this paragraph provides context for the work, you might
consider placing it earlier (in Section 1 as well as potentially in
the Abstract).

Might modify this as follows:

"
   Latency is the critical performance factor for many Internet
   applications, including web services, voice, realtime video,
   remote presence, instant messaging, online gaming, remote
   desktop, cloud services, and remote control of machinery and
   industrial processes. In these applications, increases in access
   network bitrate may offer diminishing returns. As a result,
   much has been done to reduce delays by placing caches or
   servers closer to users. However, queuing remains a major
   contributor to latency."

   The Diffserv architecture provides Expedited Forwarding [RFC3246], so
   that low latency traffic can jump the queue of other traffic.  If
   growth in high-throughput latency-sensitive applications continues,
   periods with solely latency-sensitive traffic will become
   increasingly common on links where traffic aggregation is low.  For
   instance, on the access links dedicated to individual sites (homes,
   small enterprises or mobile devices).  These links also tend to
   become the path bottleneck under load.  During these periods, if all
   the traffic were marked for the same treatment, at these bottlenecks
   Diffserv would make no difference.  Instead, it becomes imperative to
   remove the underlying causes of any unnecessary delay.

[BA] This paragraph is hard to follow. You might consider rewriting it as
follows:

   "The Diffserv architecture provides Expedited Forwarding [RFC3246], to
   enable low latency traffic to jump the queue of other traffic. However,
   the latency-sensitive applications are growing in number along
   with the fraction of latency-sensitive traffic. On bottleneck links where
   traffic aggregation is low (such as links to homes, small enterprises or
   mobile devices), if all traffic is marked for the same treatment, Diffserv
   will not make a difference. Instead, it is necessary to remove unnecessary
   delay."

  long enough for the queue to fill the buffer, making every packet in
   other flows sharing the buffer sit through the queue.

   [BA] "sit through" -> "share"

   Active queue management (AQM) was originally developed to solve this
   problem (and others).  Unlike Diffserv, which gives low latency to
   some traffic at the expense of others, AQM controls latency for _all_
   traffic in a class.  In general, AQM methods introduce an increasing
   level of discard from the buffer the longer the queue persists above
   a shallow threshold.  This gives sufficient signals to capacity-
   seeking (aka. greedy) flows to keep the buffer empty for its intended
   purpose: absorbing bursts.  However, RED [RFC2309] and other
   algorithms from the 1990s were sensitive to their configuration and
   hard to set correctly.  So, this form of AQM was not widely deployed.

   More recent state-of-the-art AQM methods, e.g. FQ-CoDel [RFC8290],
   PIE [RFC8033], Adaptive RED [ARED01], are easier to configure,
   because they define the queuing threshold in time not bytes, so it is
   invariant for different link rates.  However, no matter how good the
   AQM, the sawtoothing sending window of a Classic congestion control
   will either cause queuing delay to vary or cause the link to be
   underutilized.  Even with a perfectly tuned AQM, the additional
   queuing delay will be of the same order as the underlying speed-of-
   light delay across the network, thereby roughly doubling the total
   round-trip time.

[BA] Would suggest rewriting as follows:

"  More recent state-of-the-art AQM methods such as FQ-CoDel [RFC8290],
   PIE [RFC8033] and Adaptive RED [ARED01], are easier to configure,
   because they define the queuing threshold in time not bytes, providing
   link rate invariance.  However, AQM does not change the "sawtooth"
   sending behavior of Classic congestion control algorithms, which
   alternates between varying queuing delay and link underutilization.
   Even with a perfectly tuned AQM, the additional queuing delay will
   be of the same order as the underlying speed-of-light delay across
   the network, thereby roughly doubling the total round-trip time."
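[BA] Incidentally, the "threshold in time not bytes" point could be
illustrated very simply (my own sketch; the numbers are illustrative, not from
the draft): the same byte backlog represents very different delays at different
link rates, whereas a sojourn-time threshold needs no per-rate configuration.

```python
# Why a time-based queue threshold is invariant across link rates
# (illustrative sketch; constants are mine, not from the draft).

def queue_delay_s(queue_bytes: int, link_rate_bps: float) -> float:
    """Sojourn delay represented by a given queue backlog."""
    return queue_bytes * 8 / link_rate_bps

# The same 50 kB backlog is 40 ms at 10 Mb/s but only 0.4 ms at
# 1 Gb/s, so a byte threshold would need retuning per link rate...
d_slow = queue_delay_s(50_000, 10e6)  # 0.04 s
d_fast = queue_delay_s(50_000, 1e9)   # 0.0004 s

# ...whereas signalling when sojourn delay exceeds a time target
# (as FQ-CoDel and PIE do) works unchanged at any rate.
TARGET_DELAY_S = 0.005  # e.g. a 5 ms target

def should_signal(queue_bytes: int, link_rate_bps: float) -> bool:
    return queue_delay_s(queue_bytes, link_rate_bps) > TARGET_DELAY_S
```

That contrast is the whole of the "link rate invariance" claim, so a sentence
of this sort might be worth adding to the draft itself.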

   If a sender's own behaviour is introducing queuing delay variation,
   no AQM in the network can 'un-vary' the delay without significantly
   compromising link utilization.  Even flow-queuing (e.g. [RFC8290]),
   which isolates one flow from another, cannot isolate a flow from the
   delay variations it inflicts on itself.  Therefore those applications
   that need to seek out high bandwidth but also need low latency will
   have to migrate to scalable congestion control.

[BA] I'd suggest you delete the last sentence, since the point is
elaborated on in more detail in the next paragraph.

   Altering host behaviour is not enough on its own though.  Even if
   hosts adopt low latency behaviour (scalable congestion controls),
   they need to be isolated from the behaviour of existing Classic
   congestion controls that induce large queue variations.  L4S enables
   that migration by providing latency isolation in the network and

[BA] "enables that migration" -> "motivates incremental deployment"

   distinguishing the two types of packets that need to be isolated: L4S
   and Classic.  L4S isolation can be achieved with a queue per flow
   (e.g. [RFC8290]) but a DualQ [I-D.ietf-tsvwg-aqm-dualq-coupled] is
   sufficient, and actually gives better tail latency.  Both approaches
   are addressed in this document.

   The DualQ solution was developed to make very low latency available
   without requiring per-flow queues at every bottleneck.  This was

[BA] "This was" -> "This was needed"

   Latency is not the only concern addressed by L4S: It was known when

   [BA] ":" -> "."

   explanation is summarised without the maths in Section 4 of the L4S

   [BA] "summarised without the maths" -> "summarized without the mathematics"

1.2.  Terminology

[BA] Since Section 1.1 refers to some of the Terminology defined in
this section, I'd consider placing this section before that one.

   Reno-friendly:  The subset of Classic traffic that is friendly to the
      standard Reno congestion control defined for TCP in [RFC5681].
      The TFRC spec. [RFC5348] indirectly implies that 'friendly' is

      [BA] "spec." -> "specification"

      defined as "generally within a factor of two of the sending rate
      of a TCP flow under the same conditions".  Reno-friendly is used
      here in place of 'TCP-friendly', given the latter has become
      imprecise, because the TCP protocol is now used with so many
      different congestion control behaviours, and Reno is used in non-

      [BA] "Reno is used" -> "Reno can be used"

4.  Transport Layer Behaviour (the 'Prague Requirements')

[BA] This section is empty, and there are no previous references to Prague, so
I think you need to say a few words here to introduce the section.


-- 
last-call mailing list
last-call@xxxxxxxx
https://www.ietf.org/mailman/listinfo/last-call


