Hi Peter, thanks for looking at this.
Here is the information I got after running the tests. During the test,
client1 got only 7 MB/s for SEND instead of the expected 40 MB/s,
while RECV reached 40 MB/s.
Thanks,
William
# ip link show
...
5: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc htb qlen 1000
link/ether 00:e0:ed:04:9f:a2 brd ff:ff:ff:ff:ff:ff
...
12: ifb0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 9000 qdisc htb qlen 32
link/ether f2:f2:77:f9:cf:30 brd ff:ff:ff:ff:ff:ff
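For completeness, the ifb0 device shown above is presumably created and brought up along these lines before any qdiscs are attached; the numifbs parameter and the explicit mtu/txqueuelen settings below are only a guess based on the values visible in the listing:

modprobe ifb numifbs=1           # load the ifb module with a single device
ip link set dev ifb0 up
ip link set dev ifb0 mtu 9000    # match the jumbo MTU of eth2
ip link set dev ifb0 txqueuelen 32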
# tc qdisc show
qdisc pfifo_fast 0: dev eth0 root bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc ingress ffff: dev eth2 ----------------
qdisc htb 1: dev eth2 r2q 100 default 30 direct_packets_stat 0
qdisc pfifo_fast 0: dev eth3 root bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev eth4 root bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev eth5 root bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev eth6 root bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev eth7 root bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev eth8 root bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc htb 1: dev ifb0 r2q 100 default 30 direct_packets_stat 0
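For reference, an ingress qdisc on eth2 together with HTB on ifb0 is the standard ifb redirect pattern; a minimal sketch of that pattern looks like this (the catch-all u32 filter is just the common idiom, the filters in the actual script may differ):

# send everything arriving on eth2 through ifb0, where HTB can shape it
tc qdisc add dev eth2 handle ffff: ingress
tc filter add dev eth2 parent ffff: protocol ip u32 match u32 0 0 \
    action mirred egress redirect dev ifb0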
# tc -s -d class show dev ifb0
class htb 1:10 parent 1:1 prio 0 quantum 200000 rate 320000Kbit ceil 960000Kbit burst 169000b/64 mpu 0b overhead 0b cburst 489000b/64 mpu 0b overhead 0b level 0
 Sent 2366125838 bytes 928639 pkt (dropped 0, overlimits 0 requeues 0)
 rate 0bit 0pps backlog 0b 0p requeues 0
 lended: 925807 borrowed: 2832 giants: 0
 tokens: 4224 ctokens: 4075

class htb 1:1 root rate 960000Kbit ceil 960000Kbit burst 489000b/64 mpu 0b overhead 0b cburst 489000b/64 mpu 0b overhead 0b level 7
 Sent 36927678674 bytes 6132723 pkt (dropped 0, overlimits 0 requeues 0)
 rate 2672bit 1pps backlog 0b 0p requeues 0
 lended: 1131873 borrowed: 0 giants: 0
 tokens: 4074 ctokens: 4074

class htb 1:30 parent 1:1 prio 1 quantum 200000 rate 640000Kbit ceil 960000Kbit burst 328960b/64 mpu 0b overhead 0b cburst 489000b/64 mpu 0b overhead 0b level 0
 Sent 34561552836 bytes 5204084 pkt (dropped 44, overlimits 0 requeues 0)
 rate 528bit 0pps backlog 0b 0p requeues 0
 lended: 4075043 borrowed: 1129041 giants: 0
 tokens: 4108 ctokens: 4074
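As a sanity check on the units (taking tc's Kbit as 1000 bit/s and MB/s as 10^6 bytes/s), the configured rates work out to:

  class 1:10:       320000 Kbit/s / 8000 = 40 MB/s guaranteed, ceil 120 MB/s
  class 1:30:       640000 Kbit/s / 8000 = 80 MB/s guaranteed, ceil 120 MB/s
  class 1:1 (root): 960000 Kbit/s / 8000 = 120 MB/s

Which class the SEND traffic actually lands in depends on the filters in the script, but 7 MB/s is far below even the smallest of these guarantees.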
# tc -s -d class show dev eth2
class htb 1:10 parent 1:1 prio 0 quantum 200000 rate 320000Kbit ceil 960000Kbit burst 169000b/64 mpu 0b overhead 0b cburst 489000b/64 mpu 0b overhead 0b level 0
 Sent 12092794712 bytes 1544210 pkt (dropped 0, overlimits 0 requeues 0)
 rate 56bit 0pps backlog 0b 0p requeues 0
 lended: 1543687 borrowed: 523 giants: 0
 tokens: 4224 ctokens: 4075

class htb 1:1 root rate 960000Kbit ceil 960000Kbit burst 489000b/64 mpu 0b overhead 0b cburst 489000b/64 mpu 0b overhead 0b level 7
 Sent 36872760531 bytes 7346321 pkt (dropped 0, overlimits 0 requeues 0)
 rate 288bit 0pps backlog 0b 0p requeues 0
 lended: 40477 borrowed: 0 giants: 0
 tokens: 4073 ctokens: 4073

class htb 1:30 parent 1:1 prio 1 quantum 200000 rate 640000Kbit ceil 960000Kbit burst 328960b/64 mpu 0b overhead 0b cburst 489000b/64 mpu 0b overhead 0b level 0
 Sent 24779965819 bytes 5802111 pkt (dropped 0, overlimits 0 requeues 0)
 rate 176bit 0pps backlog 0b 0p requeues 0
 lended: 5762157 borrowed: 39954 giants: 0
 tokens: 4109 ctokens: 4073
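For reference, here is a minimal sketch of an HTB setup that would produce the class tree shown above on both eth2 and ifb0; the burst/cburst values and the classification filters are left out because they cannot be recovered from this output, so this is only an approximation of the real script:

for DEV in eth2 ifb0; do
    # root HTB qdisc, unclassified traffic falls into 1:30
    tc qdisc add dev $DEV root handle 1: htb default 30 r2q 100
    # parent class capped at 960000 Kbit (about 120 MB/s)
    tc class add dev $DEV parent 1:  classid 1:1  htb rate 960000kbit ceil 960000kbit
    tc class add dev $DEV parent 1:1 classid 1:10 htb rate 320000kbit ceil 960000kbit prio 0 quantum 200000
    tc class add dev $DEV parent 1:1 classid 1:30 htb rate 640000kbit ceil 960000kbit prio 1 quantum 200000
done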
Peter Rabbitson wrote:
William Xu wrote:
So TC works well as long as the total bandwidth is below 90 MB/s, which is
about 70% of the wire speed. Is it possible to use the full bandwidth
(122 MB/s) in my script?
In order to troubleshoot further, more info is needed:
1) execute your script with 120 MB/s as the limit
2) perform a test transfer for several minutes
3) post back the output of the following commands:
ip link show
tc qdisc show
tc -s -d class show dev ifb0
tc -s -d class show dev eth2
Peter