Hi,

I'm experimenting with the HTB queueing discipline for traffic shaping. However, it is not really exact. Currently, I use this setup:

    tc qdisc add dev eth0 root handle 1: htb default 3
    tc class add dev eth0 parent 1: classid 1:1 htb rate 20Mbit burst 4kB
    tc qdisc add dev eth0 parent 1:1 sfq
    tc class add dev eth0 parent 1: classid 1:2 htb rate 3Mbit burst 2kB
    tc qdisc add dev eth0 parent 1:2 sfq
    tc class add dev eth0 parent 1: classid 1:3 htb rate 77Mbit burst 150kB
    tc qdisc add dev eth0 parent 1:3 sfq

    tc filter add dev eth0 parent 1:0 prio 7 protocol ip handle 1 fw classid 1:1
    tc filter add dev eth0 parent 1:0 prio 7 protocol ip handle 2 fw classid 1:2

    /usr/local/sbin/iptables -t mangle -F OUTPUT
    /usr/local/sbin/iptables -t mangle -A OUTPUT -p tcp --dport 9021 -j MARK --set-mark 1
    /usr/local/sbin/iptables -t mangle -A OUTPUT -p tcp --dport 9022:9023 -j MARK --set-mark 2

The network adapter is connected to a 100 Mbit switch. When testing with netio, I can send up to 370 kB/sec through class 1:2 and up to 2.4 MB/sec via class 1:1, both measured by one or more netio instances and by the rate output of "tc -s class dev eth0". This effect occurs with Linux 2.4.16, with the kernel compiled with HZ set to either 100 or 1024, and is of course independent of the filter type used. During the tests, no packets need to be dropped; HTB just delays them.

Output after two netio instances have been sending to class 1:2 for some time:

    root@xxxxxxxxx /root/ >tc -s class list dev eth0
    [...]
    class htb 1:2 root leaf 800e: prio 0 rate 3Mbit ceil 3Mbit burst 2Kb cburst 2Kb
     Sent 236734498 bytes 156441 pkts (dropped 0, overlimits 301823)
     rate 378286bps 250pps backlog 44p
     lended: 156397 borrowed: 0 giants: 0 injects: 0
     tokens: -3714 ctokens: -3714
    [...]

tbf does not seem to be able to do exact rate limiting either. Is anything wrong in my configuration, or am I just misreading the statistics?

Stefan
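
P.S.: As a sanity check, here is the arithmetic I used to compare the configured bit rates against the byte rates netio shows (the numbers below are the measurements from above; the unit conversion assumes 1 Mbit = 10^6 bits/s, though iproute2 may use 1024-based multipliers, which would shift the targets slightly upward, and it assumes the "bps" in the tc rate output is bytes per second, as its magnitude suggests):

```python
# Convert the configured HTB class rates to bytes/sec, the unit that
# netio and (apparently) the "rate ...bps" field of "tc -s class" use.
# Assumption: 1 Mbit = 10^6 bits; tc may interpret "Mbit" as 1024*1024.

def mbit_to_bytes_per_sec(mbit):
    return mbit * 1_000_000 // 8  # 8 bits per byte

targets = {
    "1:1 (20Mbit)": mbit_to_bytes_per_sec(20),  # 2_500_000 B/s
    "1:2 (3Mbit)":  mbit_to_bytes_per_sec(3),   # 375_000 B/s
}

measured = {
    "1:1 (20Mbit)": 2_400_000,  # ~2.4 MB/s seen via netio
    "1:2 (3Mbit)":  378_286,    # "rate 378286bps" from tc above
}

for cls, target in targets.items():
    got = measured[cls]
    print(f"{cls}: target {target} B/s, measured {got} B/s "
          f"({100 * got / target:.1f}% of target)")
```

So if "bps" really is bytes/sec, class 1:2 is within about 1% of its configured rate, which makes me wonder whether the inexactness I see is mostly a units question.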