From: Peilin Ye <peilin.ye@xxxxxxxxxxxxx>

Recently we introduced a Qdisc backpressure infrastructure (currently
supports UDP sockets).  Use it in CBQ Qdisc.

Tested with 500 Mbits/sec rate limit using 16 iperf UDP 1 Gbit/sec
clients.  Before:

[  3]  0.0-15.0 sec  55.8 MBytes  31.2 Mbits/sec   1.185 ms 1073326/1113110 (96%)
[  3]  0.0-15.0 sec  55.9 MBytes  31.3 Mbits/sec   1.001 ms 1080330/1120201 (96%)
[  3]  0.0-15.0 sec  55.6 MBytes  31.1 Mbits/sec   1.750 ms 1078292/1117980 (96%)
[  3]  0.0-15.0 sec  55.3 MBytes  30.9 Mbits/sec   0.895 ms 1089200/1128640 (97%)
<...>                                                       ^^^^^^^^^^^^^^^^^^^^^

Total throughput is 493.7 Mbits/sec and average drop rate is 96.13%.

Now enable Qdisc backpressure for UDP sockets, with
udp_backpressure_interval default to 100 milliseconds:

[  3]  0.0-15.0 sec  54.2 MBytes  30.3 Mbits/sec   2.302 ms   54/38692 (0.14%)
[  3]  0.0-15.0 sec  54.1 MBytes  30.2 Mbits/sec   2.227 ms   54/38671 (0.14%)
[  3]  0.0-15.0 sec  53.5 MBytes  29.9 Mbits/sec   2.043 ms   57/38203 (0.15%)
[  3]  0.0-15.0 sec  58.1 MBytes  32.5 Mbits/sec   1.843 ms    1/41480 (0.0024%)
<...>                                                       ^^^^^^^^^^^^^^^^^

Total throughput is 497.1 Mbits/sec (0.69% higher), average drop rate is
0.08% (99.9% lower).

Fairness between flows is slightly affected, with per-flow average
throughput ranging from 29.9 to 32.6 Mbits/sec (compared with 30.3 to
31.3 Mbits/sec).

Signed-off-by: Peilin Ye <peilin.ye@xxxxxxxxxxxxx>
---
 net/sched/sch_cbq.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/net/sched/sch_cbq.c b/net/sched/sch_cbq.c
index 91a0dc463c48..42e44f570988 100644
--- a/net/sched/sch_cbq.c
+++ b/net/sched/sch_cbq.c
@@ -381,6 +381,7 @@ cbq_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 		return ret;
 	}
 
+	qdisc_backpressure(skb);
 	if (net_xmit_drop_count(ret)) {
 		qdisc_qstats_drop(sch);
 		cbq_mark_toplevel(q, cl);
-- 
2.20.1