[PATCH RFC v2 net-next 3/5] net/sched: sch_tbf: Use Qdisc backpressure infrastructure

From: Peilin Ye <peilin.ye@xxxxxxxxxxxxx>

Recently we introduced a Qdisc backpressure infrastructure (which
currently supports UDP sockets).  Use it in the TBF Qdisc.
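For reference, a setup like the one tested below can be sketched as
follows (the device name, burst/latency values, and server address are
illustrative assumptions, not taken from the actual test):

```shell
# Attach TBF at a 500 Mbit/s rate with an SFQ child Qdisc.
# "eth0" and the burst/latency parameters are assumptions.
tc qdisc add dev eth0 root handle 1: tbf rate 500mbit burst 64kb latency 50ms
tc qdisc add dev eth0 parent 1:1 handle 10: sfq

# One of the 16 iperf UDP clients, each sending at 1 Gbit/s for 15 seconds:
iperf -c 192.0.2.1 -u -b 1g -t 15
```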

Tested with 500 Mbits/sec rate limit and SFQ inner Qdisc using 16 iperf UDP
1 Gbit/sec clients.  Before:

[  3]  0.0-15.0 sec  53.6 MBytes  30.0 Mbits/sec   0.208 ms 1190234/1228450 (97%)
[  3]  0.0-15.0 sec  54.7 MBytes  30.6 Mbits/sec   0.085 ms   955591/994593 (96%)
[  3]  0.0-15.0 sec  55.4 MBytes  31.0 Mbits/sec   0.170 ms  966364/1005868 (96%)
[  3]  0.0-15.0 sec  55.0 MBytes  30.8 Mbits/sec   0.167 ms   925083/964333 (96%)
<...>                                                         ^^^^^^^^^^^^^^^^^^^

Total throughput is 480.2 Mbits/sec and average drop rate is 96.5%.

Now enable Qdisc backpressure for UDP sockets, with
udp_backpressure_interval at its default of 100 milliseconds:

[  3]  0.0-15.0 sec  54.4 MBytes  30.4 Mbits/sec   0.097 ms 450/39246 (1.1%)
[  3]  0.0-15.0 sec  54.4 MBytes  30.4 Mbits/sec   0.331 ms 435/39232 (1.1%)
[  3]  0.0-15.0 sec  54.4 MBytes  30.4 Mbits/sec   0.040 ms 435/39212 (1.1%)
[  3]  0.0-15.0 sec  54.4 MBytes  30.4 Mbits/sec   0.031 ms 426/39208 (1.1%)
<...>                                                       ^^^^^^^^^^^^^^^^

Total throughput is 486.4 Mbits/sec (1.29% higher) and average drop rate
is 1.1% (98.86% lower).
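For reference, the backpressure interval used above could be adjusted at
runtime roughly like this (the exact sysctl path is an assumption based on
the knob's name, not confirmed by this patch):

```shell
# Assumed sysctl path for the UDP backpressure interval (milliseconds);
# 100 ms is the default used in the test above.
sysctl -w net.ipv4.udp_backpressure_interval=100
```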

However, enabling Qdisc backpressure affects fairness between flows when
TBF is used with its default bfifo inner Qdisc:

[  3]  0.0-15.0 sec  46.1 MBytes  25.8 Mbits/sec   1.102 ms 142/33048 (0.43%)
[  3]  0.0-15.0 sec  72.8 MBytes  40.7 Mbits/sec   0.476 ms 145/52081 (0.28%)
[  3]  0.0-15.0 sec  53.2 MBytes  29.7 Mbits/sec   1.047 ms 141/38086 (0.37%)
[  3]  0.0-15.0 sec  45.5 MBytes  25.4 Mbits/sec   1.600 ms 141/32573 (0.43%)
<...>                                                       ^^^^^^^^^^^^^^^^^

In this test, per-flow throughput ranged from 16.4 to 68.7 Mbits/sec.
However, total throughput was still 486.4 Mbits/sec (0.87% higher than
before), and the average drop rate was 0.41% (99.58% lower than before).

Signed-off-by: Peilin Ye <peilin.ye@xxxxxxxxxxxxx>
---
 net/sched/sch_tbf.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/net/sched/sch_tbf.c b/net/sched/sch_tbf.c
index 72102277449e..cf9cc7dbf078 100644
--- a/net/sched/sch_tbf.c
+++ b/net/sched/sch_tbf.c
@@ -222,6 +222,7 @@ static int tbf_segment(struct sk_buff *skb, struct Qdisc *sch,
 		len += segs->len;
 		ret = qdisc_enqueue(segs, q->qdisc, to_free);
 		if (ret != NET_XMIT_SUCCESS) {
+			qdisc_backpressure(skb);
 			if (net_xmit_drop_count(ret))
 				qdisc_qstats_drop(sch);
 		} else {
@@ -250,6 +251,7 @@ static int tbf_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 	}
 	ret = qdisc_enqueue(skb, q->qdisc, to_free);
 	if (ret != NET_XMIT_SUCCESS) {
+		qdisc_backpressure(skb);
 		if (net_xmit_drop_count(ret))
 			qdisc_qstats_drop(sch);
 		return ret;
-- 
2.20.1
