strange behaviour of qos

Linux Advanced Routing and Traffic Control


 



Hi

I have the following problem:
I've created a QoS script which shapes traffic
on the outgoing interface eth1. More or less it looks like this:
------------------------CUT------------------------------------------------------


#root qdisc and class for eth1
$tc qdisc add dev eth1 root handle 1:0 htb default 19
$tc class add dev eth1 parent 1:0 classid 1:1 htb rate ${CEIL_UP}kbit ceil ${CEIL_UP}kbit


#classes, qdiscs and filters for services
$tc class add dev eth1 parent 1:1 classid 1:11 htb rate 90kbit ceil 150kbit prio 0
$tc class add dev eth1 parent 1:1 classid 1:12 htb rate 100kbit ceil 250kbit prio 0
$tc class add dev eth1 parent 1:1 classid 1:13 htb rate 90kbit ceil 1250kbit prio 2



$tc qdisc add dev eth1 parent 1:11 handle 111: sfq perturb 10 #
$tc qdisc add dev eth1 parent 1:12 handle 112: sfq perturb 10 # Typical
$tc qdisc add dev eth1 parent 1:13 handle 113: sfq perturb 10 #



$tc filter add dev eth1 parent 1:0 protocol ip prio 0 handle 1 fw classid 1:11
$tc filter add dev eth1 parent 1:0 protocol ip prio 0 handle 2 fw classid 1:12
$tc filter add dev eth1 parent 1:0 protocol ip prio 2 handle 3 fw classid 1:13


The same thing goes for imq0:

#root qdisc and class for imq0
$tc qdisc add dev imq0 root handle 2:0 htb default 29
$tc class add dev imq0 parent 2:0 classid 2:1 htb rate ${CEIL_DN}kbit ceil ${CEIL_DN}kbit


#classes, qdiscs and filters for services
$tc class add dev imq0 parent 2:1 classid 2:21 htb rate 90kbit ceil 150kbit prio 0
$tc class add dev imq0 parent 2:1 classid 2:22 htb rate 100kbit ceil 250kbit prio 0
$tc class add dev imq0 parent 2:1 classid 2:23 htb rate 90kbit ceil 1250kbit prio 2


$tc qdisc add dev imq0 parent 2:21 handle 211: sfq perturb 10    #
$tc qdisc add dev imq0 parent 2:22 handle 212: sfq perturb 10    #
$tc qdisc add dev imq0 parent 2:23 handle 213: sfq perturb 10    #

$tc filter add dev imq0 parent 2:0 protocol ip prio 0 handle 0xb fw classid 2:21
$tc filter add dev imq0 parent 2:0 protocol ip prio 0 handle 0xc fw classid 2:22
$tc filter add dev imq0 parent 2:0 protocol ip prio 2 handle 0xd fw classid 2:23


-----------------------CUT--------------------------------------------------------------------------------
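(To see whether the classes and filters really loaded as intended, a dump of the live configuration and counters can help; a sketch assuming the iproute2 `tc` binary, run as root:)

```shell
# Dump qdisc, class and filter state with statistics for eth1;
# misattached qdiscs or duplicate filter handles show up here.
tc -s qdisc show dev eth1
tc -s class show dev eth1
tc -s filter show dev eth1 parent 1:0
```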


There are more of these classes - up to 19 (or 29 on imq0).
When I check statistics on the classes and qdiscs everything looks fine: traffic goes smoothly
through every class. Classes 1:11 and 2:21 are for ICMP packets only.
The problem is: when I try to download a large file over HTTP, which goes
through the 1:13 and 2:23 classes, pings rise to very high values (~350-600 ms, while normally they are around 5-25 ms).
The situation gets much worse when I allow p2p traffic (1:15, 2:25) to pass through. Although the schedulers
seem to work, because I can still browse web pages, all interactivity is lost and the output (and input)
bandwidth is consumed almost completely.
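(One classic cause of latency like this is over-subscription: if a bulk class's ceil exceeds the parent's CEIL_UP, or the guaranteed rates add up to more than the link can carry, HTB cannot protect the interactive class. A minimal sketch of the arithmetic check, using CEIL_UP=512 as an assumed example value and only the three child rates shown above - the real script has more classes:)

```shell
# Sanity-check: sum of guaranteed child rates vs. parent ceiling.
# CEIL_UP=512 is an example value, not from the original script.
CEIL_UP=512
RATES="90 100 90"          # child "rate" values in kbit (1:11, 1:12, 1:13)
sum=0
for r in $RATES; do
    sum=$((sum + r))
done
echo "sum of child rates: ${sum}kbit (parent ceil: ${CEIL_UP}kbit)"
if [ "$sum" -le "$CEIL_UP" ]; then
    echo "OK: rates fit under the ceiling"
else
    echo "over-subscribed: guarantees cannot all be met"
fi
```

Note also that each child's ceil should not exceed the parent's ceil (1:13 has ceil 1250kbit here), and that CEIL_UP itself should sit slightly below the real link speed, or the queue builds in the modem instead of in HTB and pings explode exactly as described.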


My system is 2.4.29-ow1, with the additional schedulers esfq and wrr.
P2p packets are `intercepted' by the p2p and ipp2p modules.
Other packets are marked this way:
--------------CUT--------------------
$IPTABLES -A PREROUTING -t mangle -i eth1 -j IMQ --todev 0
[...]
$IPTABLES -A PREROUTING -t mangle -i eth1 -p icmp -j MARK --set-mark 0xb
$IPTABLES -A POSTROUTING -t mangle -o eth1 -p icmp -j MARK --set-mark 0x1
[...]
$IPTABLES -A PREROUTING -t mangle -i eth1 -m multiport --sport 80,443 -j MARK --set-mark 0xd
$IPTABLES -A POSTROUTING -t mangle -o eth1 -m multiport --dport 80,443 -j MARK --set-mark 0x3
[....]
-------------CUT----------------------------
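(It can also be worth confirming that the MARK rules actually match: if the HTTP traffic is not marked, it falls into the default class 19/29 instead of 1:13/2:23. A sketch of inspecting the rule counters, run as root:)

```shell
# Packet/byte counters per mangle rule; a rule whose counters stay
# at zero during a download is not matching the traffic.
iptables -t mangle -L PREROUTING -v -n
iptables -t mangle -L POSTROUTING -v -n
```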


Any idea what might be wrong?


Thanks in advance,
Wlodek



_______________________________________________
LARTC mailing list
LARTC@xxxxxxxxxxxxxxx
http://mailman.ds9a.nl/cgi-bin/mailman/listinfo/lartc
