Re: [aqm] Last Call: <draft-ietf-aqm-fq-codel-05.txt> (FlowQueue-Codel) to Experimental RFC

I don't even know where to start, Bob. This language has been in the
draft for two years, and you are the only person I can recall
objecting to it.

It's an experimental RFC. By "safe" we mean that deploying it, within
the guidelines, won't break anything to any significant extent, while
delivering enormous benefits. "Unsafe", for example, would be
promoting use of DCTCP while it still responds incorrectly to packet
loss.

Versus your decades-long quest for better variable-rate video, we've
had over a decade of the bufferbloat problem to deal with on all
traffic, particularly along the edge, and even after solutions started
to appear in mid-2012, we haven't made a real dent in what's deployed,
except among the small, select group of devs, academics, ISPs, and
manufacturers willing to try something new. I'd like to imagine
things are shifting to the left side of the green line here, but under
load, most users are still experiencing latency orders of magnitude in
excess of what can be achieved:
http://www.dslreports.com/speedtest/results/bufferbloat?up=1

I've been testing the latest generation of wifi APs lately, and the
"best" of them, under load, in a single direction, shows over two
seconds of latency at the lower rates. Applying any of these
algorithms to wifi is proving hard, and wifi is where the bottlenecks
are shifting, at least in my world, where the default download speed
is hovering around 75 Mbit/s and wifi starts breaking down long before
that is hit.

...

I tore apart that HAS experiment you cited here:
https://lists.bufferbloat.net/pipermail/bloat/2016-February/007198.html
- where I was, at least, happy to see fq_codel handle the onslaught of
DCTCP traffic gracefully. (It makes me nervous to have such TCPs loose
on the internet, where a configuration mistake might send that traffic
at the wrong people. fq_codel is "safe" - not, perhaps, optimal - in
the face of DCTCP.)

My key objections to nearly all the experiments on your side are
non-reproducibility, no competing traffic (not even bothering to
measure web page load time in that paper, for example), no competing
upload traffic, and no inclusion of the typical things that are
latency-sensitive at all (VoIP, DNS, TCP negotiation, SSL negotiation,
etc.).

With competing download and upload traffic, fq_codel *dramatically*
improves the responsiveness and utilization of the link, for all
traffic. Above 5 Mbit/s, pretty much the only thing that matters for
web traffic is RTT; the Google citation for this is around somewhere.

I tend to weigh low latency for every other form of traffic, today,
over marginal improvements in a contrived video download scenario
someday.

As for PIE vs fq_codel: despite making PIE widely available (for
example, it ships in the Ubuntu 15.10 release) and just as easily
configurable in the sqm-scripts as fq_codel and cake are, it's been
very difficult to get users to stick with it long enough for a decent
A/B test.
This is perhaps the best comparison of the two I know of:

http://burntchrome.blogspot.com/2014/05/i-dont-like-this-pie-more-bufferbloat.html
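For anyone wanting to run their own A/B comparison on a stock Linux box, swapping between the two qdiscs is a one-liner with tc. This is a minimal sketch, not sqm-scripts itself: the interface name "eth0" is a placeholder, and both qdiscs are left at their defaults rather than tuned.

```shell
# Variant A: fq_codel as the root qdisc on the egress interface
tc qdisc replace dev eth0 root fq_codel

# Variant B: pie as the root qdisc (in mainline Linux since 3.14)
tc qdisc replace dev eth0 root pie

# Show the active qdisc and its drop/mark statistics after a test run
tc -s qdisc show dev eth0
```

Running `tc -s qdisc` before and after a load test gives you the drop and mark counts to compare alongside the latency measurements.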

The best hope I have for a set of subjective QoE experiments is that
perhaps this summer we'll have real PIE- and fq_codel-capable modems
and be able to do a blind "taste test". The PIE implementation we have
to play with today is not close enough to the draft to trust results
from it. We're still stuck with CMTSes that are not fixed, and in most
of my test sites download bufferbloat is experienced more often than
upload, in one case by a factor of 6000:1, so we need to compare three
things: default CMTS behavior down + PIE up, the same + fq_codel up,
and the CMTS throttled down (as we do it today) via HTB + PIE/fq_codel.
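The "throttled down via HTB" setup above can be sketched in a few tc commands. This is an illustrative fragment under assumptions, not the actual sqm-scripts logic: "eth0" and the 75mbit rate are placeholders for whatever sits just below the CMTS's provisioned rate, so that the queue forms where the AQM can see it rather than in the CMTS buffer.

```shell
# Shape egress slightly below the link rate with HTB, so the queue
# builds in our qdisc instead of the upstream (CMTS) buffer.
tc qdisc replace dev eth0 root handle 1: htb default 10
tc class add dev eth0 parent 1: classid 1:10 htb rate 75mbit ceil 75mbit

# Attach fq_codel (or pie) as the leaf qdisc inside the HTB class.
tc qdisc add dev eth0 parent 1:10 handle 110: fq_codel
```

Substituting `pie` for `fq_codel` on the last line gives the other arm of the comparison; the HTB shaping stays identical in both cases.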

My core sadness with the FlowQueue-CoDel draft is that we spend far
more time talking about the caveats than the observed benefits.

In the case of torrent traffic, uTP (especially) works well for me, on
my workloads. I run through a VPN all day without bothering to do
anything special about it. My girlfriend uploads big fat images all
day without ever bothering me. The gamers on campus are generally
happy. (The only prioritization I do is for DNS traffic.)

The HUGE benefit of having the fq-AQM tech in place is how much better
it is than anything that exists in the field. I cannot remember the
last non-wifi buffering event I've had on any streaming media, nor the
last time Netflix paused. I can't remember the last time I got annoyed
in an ssh or mosh shell. I hold videoconferences on a high-rate
conference bridge (60 Mbit/s down / 8 up) all the time on a 75 Mbit/s
link with "cake" while testing other stuff... but that is perhaps just
me and a rag-tag group of people who have bothered to actually install
and configure the software, and their workloads.

Without more implementations of all these bufferbloat-fighting
technologies, made deployable, with more people trying them, we're not
going to get anywhere. There are still billions of boxes left to
deploy, and a long tail of deployed gear that will probably last ten
years or more.




[Index of Archives]     [IETF Annoucements]     [IETF]     [IP Storage]     [Yosemite News]     [Linux SCTP]     [Linux Newbies]     [Fedora Users]