Terry Baume wrote:
Hi Andy,
I had a chance to play around with ifb and the wondershaper script. So
far I've come to realise a few things, one being related to what you
previously mentioned about wondershaper being somewhat flawed in
particular setups. I've included my entire modified wondershaper script
at the end of this mail so that you can see my modifications.
Your suggestion of redirecting traffic to ifb0 works wonderfully, and
keeps latency very low even with two concurrent FTP transfers - one over
ppp0 and one over eth1.
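
For reference, that redirect is typically set up along these lines (a
sketch only - the interface names, handle numbers and the act_mirred
redirect are assumptions, since Terry's full script isn't shown here):

# Bring up the ifb device:
modprobe ifb
ip link set dev ifb0 up

# Redirect egress from each real interface to ifb0, where the HTB
# classes below actually live (repeat for eth1):
tc qdisc add dev ppp0 root handle 2: prio
tc filter add dev ppp0 parent 2: protocol all u32 match u32 0 0 \
    action mirred egress redirect dev ifb0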
I start to notice problems, however, when I have a large amount of
traffic in the lowest-priority class (1:30) competing for bandwidth with
traffic in 1:20. Is this what you meant by wondershaper being slightly
flawed? If I classify all traffic directed to 10.25.0.0/25 as the lowest
priority, by adding it to the 'NOPRIOHOSTDST' option, I notice that if
there is an FTP connection from 10.25.0.0/25 as well as one coming over
ppp0, latency rises very high again. I presume this is related to the
following few lines in the script:
# bulk & default class 1:20 - gets slightly less traffic,
# and a lower priority:
tc class add dev ifb0 parent 1:1 classid 1:20 htb rate $[9*$UPLINK/10]kbit \
    burst 6k prio 2
tc class add dev ifb0 parent 1:1 classid 1:30 htb rate $[8*$UPLINK/10]kbit \
    burst 6k prio 2
I'm not sure if I'm reading this correctly, but it seems to suggest
that, combined, these two classes are allowed more bandwidth than the
link itself (9/10 + 8/10 = 17/10 of $UPLINK)?
Yes, you are right: htb rates on leaf classes are not restricted by the
parent's rate. I guess when wondershaper was written htb was new and not
in the kernel. It may work with the cbq version - not that I have ever
tested cbq.
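
You can check the rate and ceil each class actually ended up with, and
which class the traffic is really hitting, with the following (assuming
the classes above are already installed):

# Show installed classes with statistics - the byte/packet counters
# reveal where the traffic is going:
tc -s class show dev ifb0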
Do you have any suggestions as to how I can modify the script to work
around these problems - i.e. so that a stream of bulk (1:30) and a
stream of regular (1:20) traffic will not cause high latency?
You need to make the rates of the child classes add up to the parent's
rate. To let them borrow spare bandwidth you can use the ceil parameter.
If you still get high latency, reduce UPLINK a bit more - DSL has quite
high overheads.
tc class add dev ifb0 parent 1:1 classid 1:10 htb rate ${UPLINK}kbit \
    burst 6k prio 1

... rate $[6*$UPLINK/10]kbit ceil ${UPLINK}kbit ...

# bulk & default class 1:20 - gets slightly less traffic,
# and a lower priority:
tc class add dev ifb0 parent 1:1 classid 1:20 htb rate $[9*$UPLINK/10]kbit \
    burst 6k prio 2

... rate $[2*$UPLINK/10]kbit ceil ${UPLINK}kbit ...

tc class add dev ifb0 parent 1:1 classid 1:30 htb rate $[8*$UPLINK/10]kbit \
    burst 6k prio 2

... rate $[2*$UPLINK/10]kbit ceil ${UPLINK}kbit ...
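
Putting that together, the three class definitions would end up
something like this (a sketch - the 6/2/2 split is just the figures
above, and UPLINK should already be set a little below the real sync
rate to absorb the DSL overheads):

# Child rates now sum to the parent's rate (6/10 + 2/10 + 2/10 = 10/10),
# and ceil lets each class borrow up to the full uplink when the
# others are idle:
tc class add dev ifb0 parent 1:1 classid 1:10 htb \
    rate $[6*$UPLINK/10]kbit ceil ${UPLINK}kbit burst 6k prio 1
tc class add dev ifb0 parent 1:1 classid 1:20 htb \
    rate $[2*$UPLINK/10]kbit ceil ${UPLINK}kbit burst 6k prio 2
tc class add dev ifb0 parent 1:1 classid 1:30 htb \
    rate $[2*$UPLINK/10]kbit ceil ${UPLINK}kbit burst 6k prio 2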
########## downlink #############
# slow downloads down to somewhat less than the real speed to prevent
# queuing at our ISP. Tune to see how high you can set it.
# ISPs tend to have *huge* queues to make sure big downloads are fast
#
# attach ingress policer:
tc qdisc add dev ifb0 handle ffff: ingress
# filter *everything* to it (0.0.0.0/0), drop everything that's
# coming in too fast:
tc filter add dev ifb0 parent ffff: protocol ip prio 50 u32 match ip src \
0.0.0.0/0 police rate ${DOWNLINK}kbit burst 10k drop flowid :1
This won't work on ifb0. You could put it on eth1, change the
"protocol ip" to "protocol all", and change the match to "match u32 0 0".
In theory that may catch a few ARP packets, so I suppose you could add
another rule to exempt them:
tc filter add dev eth1 parent ffff: protocol arp prio 1 u32 match u32 0 0 \
    flowid :2
I would also make the burst bigger since you have quite high ingress
bandwidth.
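
Combined, the ingress side would look something like this (a sketch -
the 50k burst is just an arbitrary "bigger" value to illustrate, tune it
to your downlink):

# Ingress policing moved from ifb0 to eth1:
tc qdisc add dev eth1 handle ffff: ingress

# Exempt ARP first (prio 1 is evaluated before the prio 50 policer):
tc filter add dev eth1 parent ffff: protocol arp prio 1 u32 \
    match u32 0 0 flowid :2

# Police everything else, with a larger burst for the fast downlink:
tc filter add dev eth1 parent ffff: protocol all prio 50 u32 \
    match u32 0 0 police rate ${DOWNLINK}kbit burst 50k drop flowid :1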
Andy.