While I'm thinking about that review of howto changes, here are a few
other responses about things I don't believe. I'll be interested in
more info if anyone has any.

====

[from new doc]
> Besides being classful, CBQ is also a shaper and it is in that
> aspect that it really doesn't work very well. It should work like
> this.

I've not noticed that it doesn't work well. Perhaps because I'm
accidentally using the right parameters?

> If you try to shape a 10mbit/s connection to 1mbit/s, the link
> should be idle 90% of the time. If it isn't, we need to throttle so
> that it IS idle 90% of the time.

Which is the way it does work, as far as I can tell.

> This is pretty hard to measure, so CBQ also needs to know how big an
> average packet is going to be, and instead derives the idle time
> from the number of microseconds that elapse between requests from
> the hardware layer for more data. Combined, this can be used to
> approximate how full or empty the link is.

I can't believe this dependence on packet size, since I've always had
good results using the same default packet size even though different
tests use very different packet sizes.

  tc class add dev eth1 parent 10:100 classid 10:2 cbq \
      bandwidth 30Kbit rate 30Kbit allot 1514 weight 30Kbit prio 5 \
      maxburst 10 avpkt 1000 bounded

I send ping packets with the default data size (56 bytes), which is 98
bytes per packet including MAC, IP and ICMP headers. [[new data to
avoid problems with that in original reply]]

In 10 seconds I get 413 replies, which I assume means 413 got sent (I
enqueued 1000). That's (* 413 98 8 .1) = 32.3kbps, pretty close.

Now I try 1000 bytes of data and get 40 replies over 10 seconds (again
enqueuing 1000 packets): (* 1042 8 40 .1) = 33.3kbps, again pretty
close.

Finally, data size 1 gives me 981 replies over 10 seconds (this time I
have to increase the rate in order to saturate the limit):
(* 43 8 981 .1) = 33.7kbps.

It's clearly not counting every packet as the same size!!
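For anyone who wants to recheck the byte counting, here is a small
sketch of the arithmetic above. The per-packet overhead (8 bytes ICMP
+ 20 bytes IP + 14 bytes Ethernet = 42 bytes on top of the ping data
size) is standard framing; the reply counts are the ones I measured.

```shell
#!/bin/sh
# Effective rate in kbps from a 10-second ping run:
#   replies * (data + 42) bytes * 8 bits / 10 seconds / 1000
rate_kbps() {
    # $1 = ICMP data bytes, $2 = replies received in 10 seconds
    awk -v d="$1" -v n="$2" 'BEGIN { print n * (d + 42) * 8 / 10 / 1000 }'
}

rate_kbps 56   413   # the 56-byte-data run  -> 32.3792
rate_kbps 1000 40    # the 1000-byte run     -> 33.344
rate_kbps 1    981   # the 1-byte run        -> 33.7464
```

All three land within a couple of kbps of the configured 30Kbit rate,
which is the point: CBQ is evidently counting actual wire bytes, not
avpkt-sized packets.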
====

[from new doc]
> bandwidth
>     The physical bandwidth of your device, also needed for idle time
>     calculations.

Notice above I supplied "bandwidth 30Kbit", which is far from the
actual physical bandwidth (100Mbit). Maybe this is why I get good
results. Maybe this is what you're SUPPOSED to do!

Recall in the experiment I reported to lartc 10/10, the correct
bandwidth ended up giving me about twice the rate. I don't see from
the explanation above why that should be, but again it suggests that
this parameter really ought to be set to the desired rate.
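For concreteness, here are the two readings of the parameter side by
side. The first is what I actually ran; the second is what the doc
text ("the physical bandwidth of your device") implies you should run.
The classid 10:3 in the second command is just a placeholder, and I
have not tested that variant beyond the 10/10 experiment mentioned
above.

```shell
# what worked for me: bandwidth set to the desired rate
tc class add dev eth1 parent 10:100 classid 10:2 cbq \
    bandwidth 30Kbit rate 30Kbit allot 1514 weight 30Kbit prio 5 \
    maxburst 10 avpkt 1000 bounded

# what the doc text says to do: bandwidth set to the physical speed
tc class add dev eth1 parent 10:100 classid 10:3 cbq \
    bandwidth 100Mbit rate 30Kbit allot 1514 weight 30Kbit prio 5 \
    maxburst 10 avpkt 1000 bounded
```

If someone can explain why the second form overshoots (roughly 2x the
configured rate in my earlier test), I'd like to hear it.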