Re: should we consider making CoDel the default to combat bufferbloat?

On 28/03/15 04:40, Dennis Jacobfeuerborn wrote:
> On 28.03.2015 04:42, Zbigniew Jędrzejewski-Szmek wrote:
>> On Sat, Mar 28, 2015 at 12:34:35AM +0000, Pádraig Brady wrote:
>>> Following on from http://thread.gmane.org/gmane.linux.redhat.fedora.kernel/5549
>>> Has there been any more consideration for enabling this by default?
>> It's the default in F22+:
>>
>> /usr/lib/sysctl.d/50-default.conf:net.core.default_qdisc = fq_codel
> 
> Are there any performance comparisons out there for fq_codel compared to,
> say, mq? Does fq_codel utilize multiple cores/NIC queues effectively?

The following is from Dave Taht <dave.taht@xxxxxxxxx> ...

mq is generally attached to several different hardware queues on the
device or device driver. Below it, another qdisc is attached to
actually manage the packets; the default is pfifo_fast unless changed
with the default-qdisc sysctl.
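
For instance, to inspect or change that default on a running box (a
minimal sketch; eth0 is just a placeholder interface name, and the
sysctl change only affects qdiscs created afterwards):

  # show which qdisc new interfaces will get by default
  sysctl net.core.default_qdisc
  # switch the default to fq_codel for qdiscs created from now on
  sysctl -w net.core.default_qdisc=fq_codel
  # replace the root qdisc on an existing interface right away
  tc qdisc replace dev eth0 root fq_codel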

Modern versions of the tc command will show you those underlying
qdiscs [1] and their stats (which can be quite interesting, no matter
the qdisc).

So mq's behavior across multiple CPUs (which is parallel) remains the
same regardless of the underlying qdisc. As for any inherent per-CPU
parallelization within fq_codel itself, no, there is none. (I would
like to see the hashing and timestamping all done in parallel on the
rx path one day.)

I note that having a ton of hardware queues is sometimes not as good
as having a single queue, or a direct queue-to-CPU mapping, using
sch_fq or fq_codel. The underlying BQL buffering (and thus delay) is
additive per hardware queue, as are the sch_fq/fq_codel buffers added
per hardware queue, so it is quite possible you will get lower latency
and better throughput with fewer, or even one, hardware queues, if you
have CPUs and workloads that can drive that scenario well.
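
A sketch of how one might experiment with that (ethtool -L only works
where the driver supports the channels interface; eth0 is a
placeholder):

  # see how many hardware queues the NIC currently exposes
  ethtool -l eth0
  # reduce to a single combined rx/tx queue, if the driver allows it
  ethtool -L eth0 combined 1
  # attach a single fq_codel instance at the root
  tc qdisc replace dev eth0 root fq_codel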

There was a long thread on netdev six months or so back comparing NFS
performance with hardware multi-queue (with hash collisions on the
limited number of hardware queues) against sch_fq (unlimited queues)
and fq_codel (1024 flows by default). I can try to find that thread
and the resulting paper if you like.

As always, I encourage folks to simply try it, using benchmarks that
test for latency under load, like netperf-wrapper, or your favorite
network-stressing workload; see what happens, and get back to the
codel or bloat mailing lists with your results.
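
For example, a minimal netperf-wrapper run (the server name and
duration below are placeholders; you need a netperf server running at
the far end):

  # RRUL test: bulk TCP flows in both directions plus latency
  # probes, for 60 seconds, against a server you control
  netperf-wrapper -H netperf.example.com -l 60 rrul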

In many scenarios I generally find myself using a diffserv filter to
distribute traffic among the hardware queues, rather than something
that hashes on the 5-tuple.
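
One software-only way to do something like that (not necessarily the
exact filter setup described above) is mqprio, which maps skb
priorities, derived from the TOS/DSCP bits for locally generated IP
traffic, onto groups of hardware queues; the 16-entry map below is
illustrative, not tuned, and eth0 is a placeholder:

  # three traffic classes; the map assigns each skb priority (0-15)
  # to a class, and each class is pinned to one hardware queue
  tc qdisc add dev eth0 root mqprio num_tc 3 \
      map 0 0 0 0 1 1 1 1 2 2 2 2 0 0 0 0 \
      queues 1@0 1@1 1@2 hw 0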

[1]

d@lakshmi:~/git$ tc -s qdisc show dev wlan0
qdisc mq 0: root
 Sent 324546057 bytes 471799 pkt (dropped 0, overlimits 0 requeues 16)
 backlog 0b 0p requeues 16
qdisc fq_codel 0: parent :1 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
 Sent 106456 bytes 1865 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 256 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: parent :2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 256 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: parent :3 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
 Sent 324439601 bytes 469934 pkt (dropped 0, overlimits 0 requeues 16)
 backlog 0b 0p requeues 16
  maxpacket 1514 drop_overlimit 0 new_flow_count 632 ecn_mark 0
  new_flows_len 1 old_flows_len 0
qdisc fq_codel 0: parent :4 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 256 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0


--
Dave Täht
Let's make wifi fast, less jittery and reliable again!
https://plus.google.com/u/0/107942175615993706558/posts/TVX3o84jjmb