Huge system load using HTB

Linux Advanced Routing and Traffic Control

Hi!

I have some problems with HTB performance.

THE SETUP:
I have a network with 3 ISP uplinks and 1 local network uplink.
There are about 1700 clients.
I have been shaping their bandwidth with HTB, using iptables mangling, roughly like this:

tc class add dev $DEV parent 1:10 classid 1:${CLASS_ID} htb rate \
    16kbit ceil 512kbit burst 2kb prio 2 quantum 1500
tc qdisc add dev $DEV parent 1:${CLASS_ID} handle ${CLASS_ID}: \
    sfq perturb 10
tc filter add dev $DEV parent 1: protocol ip prio 17 u32 \
    match ip dst "$IP" flowid 1:${CLASS_ID}
iptables -A "$CHAIN_NAME" -t mangle -s "$IP" -j MARK --set-mark $CLASS_ID
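
(For context, the per-client classes above attach to a root HTB setup roughly like the sketch below; the rates and the default class are placeholders here, not my real values:)

# Rough sketch of the root qdisc and parent class the 1:${CLASS_ID} leaves hang off.
# Rates and the default class are placeholders, not my real configuration.
tc qdisc add dev $DEV root handle 1: htb default 20
tc class add dev $DEV parent 1: classid 1:1 htb rate 100mbit
tc class add dev $DEV parent 1:1 classid 1:10 htb rate 100mbit ceil 100mbit
tc class add dev $DEV parent 1:1 classid 1:20 htb rate 1mbit ceil 100mbit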

I use iptables subchains, so that every chain contains 32 entries.
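
(Roughly like this; the chain names, the hook chain, and the /27-per-chain grouping below are just an illustration of the structure, not my exact script:)

# illustrative only: one subchain per group of 32 addresses, jumped to from
# PREROUTING (assumption), with the per-IP MARK rules inside
iptables -t mangle -N clients_000
iptables -t mangle -A PREROUTING -s 10.0.0.0/27 -j clients_000
iptables -t mangle -A clients_000 -s 10.0.0.5 -j MARK --set-mark 105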

I have recently upgraded from Red Hat 9.0 to Fedora Core 2. I cannot go back to RH9, because I had other problems with it.
I use kernel 2.6.8-1.521 (the problem was the same with the original kernel); I did not recompile it.


THE PROBLEM:
When I load my rules, the system load jumps to 100%.
I have been testing this, and I am certain that HTB is the culprit.
With all the iptables rules loaded (including the mangling), the server runs fine at about 3% load.
But as soon as I turn HTB on, it starts to crawl.
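
(In case it helps, the test was essentially the sequence sketched below; the commands are an approximation, not my literal script:)

# approximate test sequence, not the literal script
tc qdisc del dev $DEV root 2>/dev/null   # remove HTB entirely
#   -> with only the iptables mangle rules active, load stays around 3%
sh ./add_htb_rules.sh                    # hypothetical wrapper re-adding the
                                         # qdisc/class/filter commands shown above
#   -> load immediately climbs towards 100%
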
The chart can be found here:
http://mtower.mlyniec.gda.pl/~spam/tst.png


It's a fairly strong machine (P4 2.8 GHz with HT, 1 GB RAM), and it ran this setup quite well for half a year (system load never exceeded 30-40%).

I guess I have overlooked something, or the kernel has a bug.

Does anybody have any clues?

Szymon Miotk
_______________________________________________
LARTC mailing list / LARTC@xxxxxxxxxxxxxxx
http://mailman.ds9a.nl/mailman/listinfo/lartc HOWTO: http://lartc.org/
