Rate throttling behaves unexpectedly

Hello,

I am using netem as part of some network emulation software based on 
network namespaces. However, the rate throttling (applied via tc's 
"rate" argument) does not behave as I would expect. I could not find 
any clues in the man page, and the online documentation about rate 
throttling is sparse (since it is comparatively new), so I am not sure 
if it is working as intended.

Specifically:
- The measured link bandwidth appears higher than the specified limit
- The measured link bandwidth *increases* when a higher delay is added
- The measured link bandwidth is substantially different from the 
result when using a netem/tbf qdisc combination
- The measured link bandwidth for the same very slow settings varies 
significantly across machines

Here are the steps to reproduce these observations:
====================================================================
# Set up two network namespaces and link them with a veth pair
# This uses static ARP entries to avoid ARP lookup delays
ip netns add net1
ip netns add net2
ip link add name veth address 00:00:00:00:00:01 netns net1 type veth
   peer name veth address 00:00:00:00:00:02 netns net2
ip netns exec net1 ip addr add 10.0.0.1/24 dev veth
ip netns exec net2 ip addr add 10.0.0.2/24 dev veth
ip netns exec net1 ip link set dev veth up
ip netns exec net2 ip link set dev veth up
ip netns exec net1 ip neigh add 10.0.0.2 lladdr 00:00:00:00:00:02
   dev veth
ip netns exec net2 ip neigh add 10.0.0.1 lladdr 00:00:00:00:00:01
   dev veth

# Test the delay and rate without any qdisc applied. I'm using iperf
# to measure the bandwidth here. The server should remain running when
# testing with the iperf client
ip netns exec net2 ping 10.0.0.1 -c 4
ip netns exec net1 iperf -s
ip netns exec net2 iperf -c 10.0.0.1
# On my machine: rtt min/avg/max/mdev = 0.049/0.052/0.062/0.010 ms
#                Bandwidth: 31.2 Gbits/sec
# (Results are as expected)

# Now test with a 512kbit/s netem rate throttle
ip netns exec net2 tc qdisc add dev veth root netem rate 512kbit
ip netns exec net2 ping 10.0.0.1 -c 4
ip netns exec net1 iperf -s
ip netns exec net2 iperf -c 10.0.0.1
# On my machine: rtt min/avg/max/mdev = 1.662/1.664/1.667/0.028 ms
#                Bandwidth: 640 Kbits/sec
# (Expected results: bandwidth should be less than 512 Kbits/sec since
# TCP won't perfectly saturate the link)

# Add a 100ms delay to the rate throttle
ip netns exec net2 tc qdisc change dev veth root netem rate 512kbit
   delay 100ms
ip netns exec net2 ping 10.0.0.1 -c 4
ip netns exec net1 iperf -s
ip netns exec net2 iperf -c 10.0.0.1
# On my machine: rtt min/avg/max/mdev = 101.597/101.658/101.708/0.039 ms
#                Bandwidth: 1.17 Mbits/sec
# (Expected results: bandwidth should be less than in the previous test)

# Now test the same condition using tbf for rate throttling instead
ip netns exec net2 tc qdisc delete dev veth root
ip netns exec net2 tc qdisc add dev veth root handle 1:0 netem
   delay 100ms
ip netns exec net2 tc qdisc add dev veth parent 1:1 handle 10: tbf
   rate 512kbit latency 5ms burst 2048
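# (Optional: confirm the netem/tbf chain is attached as intended)
ip netns exec net2 tc qdisc show dev veth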
ip netns exec net2 ping 10.0.0.1 -c 4
ip netns exec net1 iperf -s
ip netns exec net2 iperf -c 10.0.0.1
# On my machine: rtt min/avg/max/mdev = 100.069/100.110/100.152/0.031 ms
#                Bandwidth: 270 Kbits/sec
# (Results are as expected)

# Cleanup
ip netns del net1
ip netns del net2
====================================================================

My uname -a:
Linux 4.8.0-1-amd64 #1 SMP Debian 4.8.7-1 (2016-11-13) x86_64 GNU/Linux

I get similar results on my faster machine using 4.9.0-2-amd64, except 
that the results with the same commands are more dramatic: roughly 80 
Gbits/sec unthrottled, 1 Mbit/sec with the 512kbit throttle and no 
delay, and almost 5 Mbits/sec with the 512kbit throttle and 100ms delay.

Applying qdiscs on both ends of the veth pair does not substantially 
affect the results.
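For concreteness, by "both ends" I mean mirroring the settings inside 
each namespace; e.g., for the plain rate throttle case (a sketch, not 
the exact commands from my runs):

ip netns exec net1 tc qdisc add dev veth root netem rate 512kbit
ip netns exec net2 tc qdisc add dev veth root netem rate 512kbit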

Am I missing something about the way that netem's rate throttling works 
in relation to tbf, network namespaces, and iperf?

Thanks,
~Nik

From Joseph.Beshay at utdallas.edu  Tue Feb 21 21:33:10 2017
From: Joseph.Beshay at utdallas.edu (Beshay, Joseph)
Date: Tue, 21 Feb 2017 21:33:10 +0000
Subject: Rate throttling behaves unexpectedly
In-Reply-To: <3abaf913-d905-7c91-3828-4555d276c336@xxxxxxxxxxxx>
References: <3abaf913-d905-7c91-3828-4555d276c336@xxxxxxxxxxxx>
Message-ID: <CO2PR01MB1976D587A8A9409B5038637FD1510@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx>

Hi Nik,

I haven't looked into the details of the issue, but I have observed similar performance issues in some experiments with my research on TCP. I actually published a paper about it: http://ieeexplore.ieee.org/abstract/document/7330147/

Most of my issues went away when I used two netem qdiscs one after the other, one for bandwidth limitation and the other for adding delay.
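Something along these lines should do it (an untested sketch on my end; the device, namespace, and handle names are just taken from your reproduction script):

ip netns exec net2 tc qdisc add dev veth root handle 1:0 netem delay 100ms
ip netns exec net2 tc qdisc add dev veth parent 1:1 handle 10: netem rate 512kbit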

Hope this helps.

Joseph Beshay

P.S.: I can send you a PDF copy of the paper if you would like to check it.
