Hello,

I know this list gets a lot of traffic, so I'll try to keep this brief.
I've been working on what I initially thought would be a simple problem, but I have been unable to produce satisfactory results. In fact, I'm now wondering whether it can be done at all -- so I'm mailing the experts on this list :). Perhaps someone can tell me whether what I want to do is achievable or not.
In summary, what I am trying to do is:

* When a UDP stream appears, prioritise UDP traffic to the detriment of all other traffic, even if that means dropping packets from other streams. In other words: an uninterruptible UDP stream, regardless of other traffic.
Sounds simple? Hrm, not so, it seems. I've read the Advanced Routing HOWTO and the HTB documentation, and worked from great base scripts such as wondershaper. I've tried CBQ and HTB (including leaf HTB structures), mixed and matched prios, pfifos, tbfs and sfqs, fiddled with various rates, and policed the ingress. All to no avail. The results always seem to be the same: heavy traffic interrupts the UDP stream, causing almost constant spikes. The stream *is* better than with no shaping at all, but it is never given such priority that it *cannot* be interrupted, which is what I'm after.
The closest I have come to an almost uninterruptible UDP stream is to cap the bandwidth of the other streams, either with CBQ/HTB on the parents or tbf on the children, but this defeats the purpose somewhat -- when a UDP stream *isn't* present, the 'bulk' traffic is still limited to a rate far less than what the link is capable of.
What I want is for all traffic to use the link's full bandwidth, except when a UDP stream appears, at which point the bulk rate should drop dynamically (or bulk packets should be dropped) in favour of an uninterruptible UDP stream.
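On paper this sounds like exactly what HTB's rate/ceil borrowing is for: give bulk a tiny guaranteed rate but a ceil of the full link, so it borrows everything while UDP is idle and gets squeezed back the moment UDP wakes up. This is roughly the shape of the HTB hierarchies I've tried (just a sketch -- the class ids and rates are placeholders for my 128kbit upstream):

---
tc qdisc add dev eth0 root handle 1: htb default 30
tc class add dev eth0 parent 1: classid 1:1 htb rate 120kbit ceil 120kbit
# UDP: large guaranteed rate, served first
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 90kbit ceil 120kbit prio 0
# interactive/ACKs
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 20kbit ceil 120kbit prio 1
# bulk: small guarantee, borrows the rest whenever UDP is idle
tc class add dev eth0 parent 1:1 classid 1:30 htb rate 10kbit ceil 120kbit prio 2
---

Even structured like that, though, heavy traffic still interrupts the UDP stream.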
Can this be done at all?
Here's what I've got so far; any input greatly appreciated.
The packets flow through a gateway, so I can control both directions by attaching qdiscs to eth0 and eth1. With that in mind, and after testing numerous combinations, I've settled on plain prio over a CBQ/HTB solution (since prio is supposed to serve a band only when all higher-priority bands are empty). Note the forced rate limit with tbf for upstream bulk. Link bandwidth is 128kbit up / 512kbit down, with eth0 facing upstream (the DSL modem) and eth1 downstream. Lastly, I've tried with and without an ingress qdisc on eth0, and found it more effective with one. This is my eth0 script, based off wondershaper:
---
tc qdisc add dev eth0 root handle 1: prio
tc qdisc add dev eth0 parent 1:1 handle 10: pfifo
tc qdisc add dev eth0 parent 1:2 handle 20: tbf rate 40kbit latency 25ms buffer 4096
tc qdisc add dev eth0 parent 1:3 handle 30: tbf rate 50kbit latency 25ms buffer 4096

# UDP
tc filter add dev eth0 parent 1: protocol ip prio 10 u32 \
    match ip protocol 17 0xff flowid 1:1

# ACK, WEB etc
tc filter add dev eth0 parent 1: protocol ip prio 10 u32 \
    match ip tos 0x10 0xff flowid 1:2
tc filter add dev eth0 parent 1: protocol ip prio 10 u32 \
    match ip dport 80 0xffff flowid 1:2
# small TCP packets with only the ACK flag set
tc filter add dev eth0 parent 1: protocol ip prio 10 u32 \
    match ip protocol 6 0xff \
    match u8 0x05 0x0f at 0 \
    match u16 0x0000 0xffc0 at 2 \
    match u8 0x10 0xff at 33 \
    flowid 1:2

# BULK
tc filter add dev eth0 parent 1: protocol ip prio 18 u32 \
    match ip dst 0.0.0.0/0 flowid 1:3

# INGRESS
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip prio 50 u32 \
    match ip src 0.0.0.0/0 \
    police rate 400kbit burst 2k drop flowid :1
---

The eth1 script is the same except that: a) there is no ingress qdisc, b) the web filter matches sport instead of dport, and c) sfq is used instead of tbf for bands 2 and 3, so no rate limiting is applied there (that's handled by the ingress on eth0).

This seems to work, but only if I limit the ingress to around 350kbit, which is drastically less than what the link is capable of -- even without a UDP stream present, downloads now come in at about a third of the possible bandwidth :/. Even at 400kbit the UDP downstream starts getting badly interrupted by other traffic, despite the prioritisation. From what I have read, especially regarding prio, this shouldn't be happening.
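One variation I've been wondering about but haven't properly tested (so treat this as a sketch): exempt UDP from the ingress policer entirely and police only the remaining traffic, so the bulk limit doesn't have to strangle the whole downstream:

---
tc qdisc add dev eth0 handle ffff: ingress
# UDP matches first and passes unpoliced
tc filter add dev eth0 parent ffff: protocol ip prio 1 u32 \
    match ip protocol 17 0xff flowid :1
# everything else is policed below the link rate so the modem's
# queue can't fill up behind the UDP stream
tc filter add dev eth0 parent ffff: protocol ip prio 50 u32 \
    match ip src 0.0.0.0/0 \
    police rate 400kbit burst 2k drop flowid :2
---

Would something along those lines be the right direction, or is ingress policing simply too coarse for this?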
Insight, wisdom, and suggestions for a solution greatly appreciated!
Ashton