Alan Goodman wrote:
I added 'stab overhead 40 linklayer atm' to my root qdisc line since I am confident my connection uses LLC multiplexing. This made the hfsc-based shaper the most accurate I have used so far. I can set the tc upload limit to the sync speed, and the tc downstream limit to sync minus 12%, which accounts for some rate limiting BT do on all lines (they limit downstream to 88.2% of the sync rate).
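To make the setup concrete, here is a minimal sketch of how the stab parameters and the 88.2% downstream figure might be combined. The sync rates and the ppp0 device name are placeholder values, not from the post; the tc command is echoed rather than executed so the sketch runs without root:

```shell
#!/bin/sh
# Example sync rates in kbit/s -- placeholder values, substitute your modem's figures
UP_SYNC=448
DOWN_SYNC=8128

# BT limit downstream to 88.2% of sync, so shape to that (integer arithmetic)
DOWN_RATE=$(( DOWN_SYNC * 882 / 1000 ))

# Root qdisc with ATM link-layer compensation; overhead 40 suits PPPoE over LLC
echo "tc qdisc add dev ppp0 root handle 1: stab overhead 40 linklayer atm hfsc default 30"
echo "upload limit: ${UP_SYNC}kbit, downstream limit: ${DOWN_RATE}kbit"
```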
If you have the choice of pppoa vs pppoe, why not use pppoa, so you can use overhead 10 and be more efficient on upload? The 88.2% figure is not an ATM rate; they do limit slightly below sync, but that is a marketing (inexact) approximate IP rate. If you were really matching their rate after allowing for overheads, your incoming shaping would do nothing at all.
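The efficiency difference comes from ATM's fixed 53-byte cells (48 bytes of payload each): the per-packet encapsulation overhead is added, then the result is padded up to a whole number of cells. A rough sketch of that arithmetic, comparing overhead 10 (assumed here to correspond to pppoa) against overhead 40 (the PPPoE/LLC figure from the post):

```shell
#!/bin/sh
# Wire bytes consumed by one IP packet on an ATM link:
# add the encapsulation overhead, round up to whole 48-byte cells, 53 bytes each.
atm_wire_bytes() {
    len=$1; overhead=$2
    cells=$(( (len + overhead + 47) / 48 ))
    echo $(( cells * 53 ))
}

atm_wire_bytes 1500 10   # 32 cells -> 1696 bytes on the wire
atm_wire_bytes 1500 40   # 33 cells -> 1749 bytes on the wire
```

With these assumed figures a full-size packet costs one extra cell under the larger overhead, which is where the upload efficiency difference comes from.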
This works great in almost every test case except 'excessive p2p'. As a test I configured a 9mbit RATE and an upper-limit m2 of 10mbit on my bulk class, then started downloading a CentOS torrent with a very high maximum connection limit set. I see 10mbit coming in on my ppp0 interface, but round-trip latency in my priority queue (sc umax 1412b dmax 20ms rate 460kbit) is hitting 100+ms. Below is a clip from a ping session which shows what happens when I pause the torrent download.
Shaping from the wrong end of the bottleneck is not ideal; if you really care about latency you need to set a lower limit for bulk and a short queue length. As you have found, being hit hard by many connections is the worst case. I never really got into hfsc so can't comment on that aspect, but I have in the past done a lot of shaping on BT adsl. In the early days of 288/576 it was very hard (for downstream). As the speeds get higher the easier it gets WRT latency - 20/60mbit vdsl2 is easy :-)
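A sketch of the "lower limit for bulk plus short queue" advice, reusing the class numbering style of the earlier test; the 85% margin, class handles, and byte limit are illustrative assumptions, not values from the thread. As before, the tc commands are echoed so the script runs without root:

```shell
#!/bin/sh
# Shape bulk a margin below the delivered rate so the queue builds here,
# on our side of the bottleneck, rather than at the ISP.
DOWN_SYNC=8128
BULK_RATE=$(( DOWN_SYNC * 85 / 100 ))   # illustrative margin below the 88.2% delivered rate

echo "tc class change dev ppp0 parent 1: classid 1:30 hfsc ls m2 ${BULK_RATE}kbit ul m2 ${BULK_RATE}kbit"
# Short per-class queue so bulk flows see drops quickly instead of buffering
echo "tc qdisc add dev ppp0 parent 1:30 handle 30: bfifo limit 30000"
```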