Thanks Andy, that makes a world of difference. I can't believe I didn't know this was built into 2.6.32 - I thought I had to patch Brouer & Stuart's stuff into what has so far been a standard Debian platform.

Changing from:

tc qdisc add dev tun0 root handle 130 htb default 510

to:

tc qdisc add dev tun0 root handle 130 stab overhead -4 mtu 1500 mpu 53 linklayer atm htb default 510

finally allows me to keep low latency on my high-priority class under a higher rate of small packets. I don't know if those stab arguments are exactly right, but it seems to work great.

It even allows me to drop the overhead allowance on our QoS from 40% to 5% (which is our default) - so if my connections give me 10 Mbps, for example, I can set my root class rate to 9.5 Mbps instead of 6 Mbps.

Summary of results, taken under bulk TCP upload and download plus a bi-directional 200 Kbps flow of 60-byte UDP packets (about 400 pkts/sec):

Without ATM calculations

Ping results:
50 packets transmitted, 46 received, 8% packet loss, time 49011ms
rtt min/avg/max/mdev = 119.912/695.257/1755.401/347.332 ms, pipe 2

Iperf VoIP simulation results:
[  3]  0.0-59.7 sec  1.31 MBytes  184 Kbits/sec  77.825 ms  2151/25000 (8.6%)
[  3]  0.0-59.7 sec  10892 datagrams received out-of-order

Summary: 700 ms delay, 350 ms jitter, 8.6% packet loss, and 43% packet reordering. VoIP would not work at all in this scenario.

With ATM calculations

Ping results:
50 packets transmitted, 49 received, 2% packet loss, time 49068ms
rtt min/avg/max/mdev = 45.420/69.654/110.919/13.757 ms

Iperf VoIP simulation results:
[  3]  0.0-60.0 sec  1.41 MBytes  198 Kbits/sec  3.004 ms  276/25000 (1.1%)
[  3]  0.0-60.0 sec  1 datagrams received out-of-order

Summary: 70 ms delay, 14 ms jitter, 1.1% packet loss, and essentially no packet reordering.

Again, many thanks!

Regards,
Matt

-----Original Message-----
From: Andy Furniss [mailto:adf.lists@xxxxxxxxx]
Sent: Thursday, May 23, 2013 1:45 PM
To: Matthew Fox
Cc: 'lartc@xxxxxxxxxxxxxxx'
Subject: Re: High latency on HTB class with small packets/high packet rates

Matthew Fox wrote:
> Thanks.
>
> I didn't mention that the rates were already conservative. Our product has an overhead option, and for this customer it was already set to over 20% (i.e. if we think we have 10 Mbps on our lines, then the qdisc rate would actually only be 8 Mbps).
>
> After some further testing, I was able to get consistently good latency under high UDP packet rates if I just increased that overhead to 40%, which I've never had to do before. It seems like this might be caused by the connections we're using being backed by ATM while also needing to support high rates of small packets, which is an unusual scenario for us - more often we need to support ATM OR high rates of small packets, but not both.
>
> I'm satisfied with blaming ATM. If anyone else has a better idea, I'd be happy to hear it.

In theory you can do ATM overheads perfectly - see man tc-stab.

In practice you also need to know what fixed overheads there are in addition to ATM, and the nature of the device you are actually shaping on affects the fixed overhead.

For example, when I had ADSL I connected using pppoa/vc-mux, which gave a fixed overhead of 10 bytes over IP. When shaping on ppp directly (PCI DSL card) this was the number to use - but when shaping on eth (connected to an external DSL modem/router) tc already sees packets as IP + 14, so -4 was the overhead to use.
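To make this concrete, here is a rough sketch of a full setup along these lines. The root qdisc line is the one quoted at the top of this message; the class ids, rates and leaf qdiscs below are illustrative assumptions only, not the actual configuration from the thread:

# Root qdisc with an ATM size table: each packet's size is adjusted to the
# number of 53-byte ATM cells it will occupy on the wire before HTB does its
# rate accounting.  "overhead -4" is the value from the message; as noted
# above, the right value depends on the encapsulation and on which device
# you shape.
tc qdisc add dev tun0 root handle 130 stab overhead -4 mtu 1500 mpu 53 linklayer atm htb default 510

# Illustrative class layout: root class at ~95% of a 10 Mbps line, a
# high-priority class for small VoIP-sized packets, default class for bulk.
tc class add dev tun0 parent 130: classid 130:1 htb rate 9500kbit
tc class add dev tun0 parent 130:1 classid 130:100 htb rate 2000kbit ceil 9500kbit prio 0
tc class add dev tun0 parent 130:1 classid 130:510 htb rate 7500kbit ceil 9500kbit prio 7

# Short fifo on the priority class, sfq on bulk (again, illustrative).
tc qdisc add dev tun0 parent 130:100 handle 1100: pfifo limit 10
tc qdisc add dev tun0 parent 130:510 handle 1510: sfq perturb 10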
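The VoIP-style load described above (a bi-directional ~200 Kbps stream of 60-byte UDP datagrams, roughly 400 pkts/sec) can be generated with iperf; the exact invocation is not given in the thread, so the flags below are an assumption:

# server side
iperf -s -u -i 10

# client side: 60-byte UDP payload at 200 kbit/s for 60 seconds, run in both
# directions at once (-d), alongside the bulk TCP transfers
iperf -c <server> -u -b 200k -l 60 -t 60 -d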