[LARTC] CBQ and guaranteed bandwidth

Linux Advanced Routing and Traffic Control


 



 I recently got a cable connection with 1 Mbps link capacity inside the
metropolitan network, 16 kbps minimum guaranteed bandwidth outside it,
and a best-effort maximum of 64 kbps.

 I tried to create a setup on my Linux box (which routes a local
network) that implements the above traffic allocations. I had the link
temporarily sped up for a few days of testing, but my setup doesn't
seem to be working properly.

Internet traffic is adequately classified with higher priority than the
metropolitan traffic, so it gets sent first, which is fine.

However, with this setup I am still unable to prevent a single client on
the internal network from hogging the entire bandwidth: that client ends
up in the best-effort class, but the other clients don't get the minimum
bandwidth that should be guaranteed to them.

What can I do, where's the mistake?

----------------------------------------------------

Configuration : 

# metropolitan networks are routed using policy routing (table main
# does not contain a default route)

ip ro add metro_dest via local_gateway dev extf table 50 realm 50
ip ru add pref 10 lookup main
ip ru add pref 100 lookup 50

ip ro add default via local_gateway dev extf table 100 realm 100
ip ru add pref 1000 lookup 100
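
# For anyone following along, the resulting rule and table setup can be
# inspected like this (a diagnostic sketch; metro_dest, local_gateway and
# extf are the placeholders used above):

```shell
# show the rule priorities: main is consulted first (pref 10),
# then the metropolitan table 50 (pref 100), then table 100,
# which holds the default route (pref 1000)
ip rule show

# list each table to confirm that only table 100 has a default route
ip route show table main
ip route show table 50
ip route show table 100
```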

#ipchains does some packet marking:

#(as apparently marking in a rule with -j MASQ didn't work both ways...)

#(internal networks are skipped)
ipchains -A input -i iif -d localnets -j ACCEPT
ipchains -A input -i iif -d 0/0 -j metro

ipchains -A output -i iif -s localnets -j ACCEPT
ipchains -A output -i iif -s 0/0 -j metro

#(first make the international mark)
ipchains -A metro -j traffic2
#(if it's metropolitan destination, change mark)
ipchains -A metro -d metro_dest -b -j traffic1

ipchains -A traffic2 -s internal_ip_1 -b -j RETURN --mark inet_1
ipchains -A traffic1 -s internal_ip_1 -b -j RETURN --mark metro_1

# At this point:
# a packet from internal_ip_1 is marked in traffic2 with inet_1;
# if the destination is metropolitan, the mark is changed to metro_1
# (bidirectional)
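
# The fwmark set by ipchains is what the `tc ... fw` filters further down
# match on via their handle. A minimal sketch of that correspondence,
# assuming inet_1 is the numeric mark 201 and 10.0.3.17 is a client
# address (both made-up values for illustration):

```shell
# mark packets to/from one internal host with fwmark 201
ipchains -A traffic2 -s 10.0.3.17 -b -j RETURN --mark 201

# the fw classifier then selects on that same mark value:
# handle 201 must equal the --mark number above
tc filter add dev eth1 parent 1:2 protocol ip prio 1 handle 201 fw flowid 1:2317
```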

#####################################################################
# Now the traffic allocation:

# eth1 is external iface
tc qdisc add dev eth1 root handle 1:0 cbq avpkt 1400 bandwidth 10mbit
cell 8

# Classes and corresponding filters
tc class add dev eth1 parent 1:0 classid 1:1 cbq bandwidth 10mbit rate
10mbit allot 1514 prio 1 cell 8 avpkt 1400 bounded

tc filter add dev eth1 parent 1:0 protocol ip prio 1 u32 match ip dst
0/0 flowid 1:1

# Total physical bandwidth of the link
tc class add dev eth1 parent 1:1 classid 1:2 cbq bandwidth 10mbit rate
1mbit weight 0.1mbit allot 1514 prio 1 cell 8 avpkt 1400 isolated
bounded
# all traffic goes in here too
tc filter add dev eth1 parent 1:1 protocol ip prio 2 u32 match ip dst
0/0 flowid 1:2

# Internet class
tc class add dev eth1 parent 1:2 classid 1:10 cbq bandwidth 10mbit rate
64kbit weight 6.4kbit allot 1514 prio 1 cell 8 avpkt 1400 maxburst 5
isolated bounded
# this class should contain the guaranteed bandwidth
tc class add dev eth1 parent 1:10 classid 1:100 cbq bandwidth 10mbit
rate 16kbit weight 1.6kbit allot 1514 prio 1 cell 8 avpkt 1400 isolated
bounded
# this class should get the best effort
tc class add dev eth1 parent 1:10 classid 1:101 cbq bandwidth 10mbit
rate 64kbit weight 6.4kbit allot 1514 prio 3 cell 8 avpkt 1400 sharing

tc qdisc add dev eth1 parent 1:100 handle 100: sfq perturb 1
tc qdisc add dev eth1 parent 1:101 handle 101: sfq perturb 1
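
# When debugging a hierarchy like this, the per-class counters usually
# show whether packets are landing in the intended classes and whether a
# class is hitting its limits (a diagnostic sketch, using only the
# classes defined above):

```shell
# per-class byte/packet counters plus borrowed/overlimit statistics
tc -s class show dev eth1

# confirm which filters are attached where
tc filter show dev eth1 parent 1:2

# per-qdisc statistics, including the sfq leaves 100: and 101:
tc -s qdisc show dev eth1
```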

# now I do this for each inet_xxx mark (for each client behind me)
SOURCE=/etc/clients/clients.full
IPS=`cat $SOURCE | grep -vE "^#|^$" | cut -f1`
for x in $IPS ; do
    IDS=`echo $x | cut -f3,4 -d '.' | tr -d '.'`
# each client should get 2kbps guaranteed bandwidth and should do
# dynamic allocation by borrowing traffic
    tc class add dev eth1 parent 1:100 classid 1:2$IDS cbq bandwidth
10mbit rate 2kbit weight 0.2kbit allot 1514 prio 1 cell 8 avpkt 1400
isolated borrow
# this should put traffic up to 2kbps in this class, or fall through to
# the best-effort class if it exceeds the limit
    tc filter add dev eth1 parent 1:2 protocol ip prio 1 handle 2$IDS fw
\
    police rate 2kbit burst 2k continue flowid 1:2$IDS
    tc filter add dev eth1 parent 1:2 protocol ip prio 2 handle 2$IDS fw
flowid 1:101
done
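
# For reference, the classid suffix in the loop is derived from the last
# two octets of the client address. Expanded for one hypothetical client
# it looks like this; note that the scheme can collide, e.g. x.x.3.17 and
# x.x.31.7 both yield 317:

```shell
# derive the numeric id the loop builds from a client address
x=10.0.3.17                                # hypothetical client entry
IDS=$(echo "$x" | cut -f3,4 -d '.' | tr -d '.')
echo "$IDS"                                # prints 317 -> classid 1:2317

# demonstrate the collision: a different host maps to the same id
y=10.0.31.7
echo "$y" | cut -f3,4 -d '.' | tr -d '.'   # also prints 317
```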

# in a similar way, I do the same for the metropolitan networks, where
# there is no guaranteed bandwidth from the ISP, but it works very well
# at higher speeds than my shaping

tc class add dev eth1 parent 1:2 classid 1:20 cbq bandwidth 10mbit rate
256kbit weight 25kbit allot 1514 prio 2 cell 8 avpkt 1400 maxburst 20
isolated bounded
tc qdisc add dev eth1 parent 1:20 handle 20: sfq perturb 1

# just place everyone in the metro class 1:20 as best effort. 
SOURCE=/etc/clients/clients.full
IPS=`cat $SOURCE | grep -vE "^#|^$" | cut -f1`
for x in $IPS ; do
    IDS=`echo $x | cut -f3,4 -d '.' | tr -d '.'`
    tc filter add dev eth1 parent 1:2 protocol ip prio 2 handle 1$IDS fw
flowid 1:20
done

