Re: Traffic shaping on multiple interfaces

Linux Advanced Routing and Traffic Control


 



Terry Baume wrote:
Andy Furniss wrote:
This won't work on ifb0. You could put it on eth1, change the "protocol ip" to "protocol all", and change the match to "match u32 0 0". In theory that may catch a few ARP packets, so I suppose you could add another rule to exempt them.

tc filter add dev eth1 parent ffff: protocol arp prio 1 u32 match u32 0 0 flowid :2

I would also make the burst bigger since you have quite high ingress bandwidth.

Andy.
Thanks for the suggestions Andy, I've put them all into place and everything seems to be working nicely. I just had a question regarding the rule to catch the ARPs - I get this message when I add the rule:

RTNETLINK answers: File exists

I guess this is because the parent qdisc for that interface has already been defined - does this mean that the rule won't work? Or is it just a debugging notice that can be safely ignored?

You shouldn't get an error for that - as long as you change the download part of the script to something like -

tc qdisc del dev eth1 ingress &>/dev/null

tc qdisc add dev eth1 handle ffff: ingress

tc filter add dev eth1 parent ffff: protocol arp prio 1 u32 match u32 0 0 flowid :2

tc filter add dev eth1 parent ffff: protocol all prio 50 u32 match u32 0 0 police rate ${DOWNLINK}kbit burst 50k drop flowid :1


I've also raised my burst to 50k. Implementing the downstream shaping techniques you suggested, I am seeing good results.

Good - I guess 50k is OK as it's a WAN with WAN latency and you are shaping ingress from a bitrate-limited line, so the virtual buffer only fills slowly.

If you really wanted to have more control than just policing the whole link, you could use another ifb and shape the traffic as you do on egress - it won't be quite the same, though, because you will be shaping from the wrong end of the bottleneck.
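To illustrate the ifb approach mentioned above, here is a minimal sketch of redirecting eth1 ingress to a second ifb device and shaping it there with HTB classes instead of a flat policer. The device name (ifb1), the class split, and the rates are assumptions for illustration, not taken from the thread.

```shell
# Hypothetical sketch - assumes ifb1 exists and DOWNLINK is set (kbit).
modprobe ifb numifbs=2
ip link set dev ifb1 up

tc qdisc del dev eth1 ingress 2>/dev/null
tc qdisc add dev eth1 handle ffff: ingress

# Redirect everything arriving on eth1 to ifb1.
tc filter add dev eth1 parent ffff: protocol all prio 10 u32 \
    match u32 0 0 action mirred egress redirect dev ifb1

# Shape on ifb1 as you would on egress - two example classes.
tc qdisc add dev ifb1 root handle 1: htb default 20
tc class add dev ifb1 parent 1: classid 1:1 htb rate ${DOWNLINK}kbit
tc class add dev ifb1 parent 1:1 classid 1:10 htb \
    rate $((DOWNLINK*2/3))kbit ceil ${DOWNLINK}kbit prio 0
tc class add dev ifb1 parent 1:1 classid 1:20 htb \
    rate $((DOWNLINK/3))kbit ceil ${DOWNLINK}kbit prio 1
```

As noted, this still shapes from the wrong end of the bottleneck, so set the rates a little below the real line rate so the queue builds on your side.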

FWIW, if it were a LAN then 50k @ 17mbit totally borks a single TCP connection, so you'll get nowhere near the set rate. It's because the latencies are so low, I guess - add 10ms of delay with netem and all is well again - or make the burst bigger.
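The netem test above can be reproduced with something like the following - the interface name is an assumption, and this is only a diagnostic to confirm that low RTT is what defeats the policer's burst:

```shell
# Hypothetical LAN test: add 10 ms of artificial delay so the policer's
# 50k burst is no longer overwhelmed by LAN-speed round trips.
tc qdisc add dev eth0 root netem delay 10ms

# ... rerun the single-TCP-connection throughput test here ...

# Remove the artificial delay afterwards.
tc qdisc del dev eth0 root
```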


I had a question regarding the ifb device - does it behave as a normal interface when doing masquerading etc?

I don't think iptables will see it as a real device.

It seems that if I place destination IPs in the NOPRIOHOSTDST field, these get marked as low priority, and the same when I put source ports in NOPRIOPORTSRC. When I put source IPs in the NOPRIOHOSTSRC field, these do not seem to get marked as low priority - I tried 192.168.0.1 as an example (an FTP server on the network). Could this be related to the fact that I'm using the IFB device?

No, it's because if you do SNAT/MASQUERADE then the addresses have already been changed - it would still happen if you shaped directly on ppp0.

To work around this you need to use iptables rules to mark the traffic, then add tc filter rules to match the marks, e.g.

iptables -t mangle -A POSTROUTING --src 192.168.0.1 -j MARK --set-mark 35

tc filter add dev ifb0 parent 1:0 prio 27 protocol ip handle 35 fw flowid 1:X

Depending on what other rules are used, you'll need to change the prio to something unused.
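To pick an unused prio and check that the mark is actually being applied, you can inspect the existing filters and the mangle-table counters - a quick sketch, assuming the ifb0/mark-35 setup from the example above:

```shell
# List filters already attached to ifb0; the prio values shown are the
# ones in use, so pick one that doesn't appear (27 in the example).
tc filter show dev ifb0

# Confirm the MARK rule is matching packets - watch the pkts counter
# increase while traffic from 192.168.0.1 flows.
iptables -t mangle -L POSTROUTING -v -n
```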

Andy.
_______________________________________________
LARTC mailing list
LARTC@xxxxxxxxxxxxxxx
http://mailman.ds9a.nl/cgi-bin/mailman/listinfo/lartc
