Thanks. I didn't mention that the rates were already conservative. Our product has an overhead option, and for this customer it was already set to over 20% (i.e. if we think we have 10 Mbps on our lines, the qdisc rate would actually only be 8 Mbps). After some further testing, I was able to get consistently good latency under high UDP packet rates by increasing that overhead to 40%, which I've never had to do before.

It seems like this might be caused by the connections we're using being backed by ATM while also needing to support high rates of small packets, which is an unusual scenario for us; more frequently we'd need to support ATM or high rates of small packets, but not both.
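For anyone curious, here is the arithmetic behind that guess. AAL5 appends an 8-byte trailer and then pads every frame out to whole 53-byte cells (48 bytes of payload plus a 5-byte header), so small packets pay proportionally far more than large ones. A rough sketch, assuming VC-mux encapsulation with no extra LLC/SNAP or PPP bytes (adjust for your encapsulation):

# Wire bytes for one IP packet carried over AAL5/ATM.
atm_wire_bytes() {
    ip_len=$1
    aal5=$((ip_len + 8))             # 8-byte AAL5 trailer
    cells=$(( (aal5 + 47) / 48 ))    # pad up to whole 48-byte cell payloads
    echo $(( cells * 53 ))           # every cell costs 53 bytes on the wire
}
atm_wire_bytes 88     # 60B UDP payload + 28B headers: 106 bytes (2 cells, ~20% over IP)
atm_wire_bytes 228    # 200B payload + 28B headers: 265 bytes (5 cells, ~16% over IP)
atm_wire_bytes 1500   # full-size packet: 1696 bytes (32 cells, ~13% over IP)

At ~400 packets/sec of 88-byte IP packets that works out to about 340 kbit/s on the wire for traffic that HTB bills at about 280 kbit/s, before counting whatever the concurrent bulk transfer loses to cell padding. As an aside, kernels since 2.6.27 (so including the 2.6.32 below) can account for this per packet via tc's size tables, e.g. something like "stab linklayer atm overhead 10" placed before the htb options on the root qdisc, with the overhead byte count depending on the encapsulation; that may be more accurate than a flat percentage.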
I'm satisfied with blaming ATM. If anyone else has a better idea, I'd be happy to hear it.

Regards,
Matt

-----Original Message-----
From: oldford@xxxxxxxxxxxxxxxxxx [mailto:oldford@xxxxxxxxxxxxxxxxxx]
Sent: Wednesday, May 22, 2013 3:31 PM
To: Matthew Fox
Subject: RE: High latency on HTB class with small packets/high packet rates

Hmm, I don't have a direct answer, but when I run into something I don't understand I run this command line:

watch 'tc -s class show dev tun0 | grep rate'

Then you can see exactly where your data is going and how your bandwidth is being allocated. Since you are allocating nearly 100% of your bandwidth, is it possible that the larger volume of small packets creates enough overhead to max out your line? I leave the last 50 kb/s unallocated to prevent saturation.

Sent from my Windows(r) phone.

-----Original Message-----
From: Matthew Fox <Matt.Fox@xxxxxxxxxxxxxxx>
Sent: Wednesday, May 22, 2013 4:24 PM
To: lartc@xxxxxxxxxxxxxxx <lartc@xxxxxxxxxxxxxxx>
Subject: High latency on HTB class with small packets/high packet rates

Hi folks,

I have an HTB qdisc with four HTB classes. One class is for low-latency traffic and is given a rate of 70% of the parent, or about 1065 Kbps in my case. The other three classes each get a rate of 10% of the parent, or 152 Kbps. The qdisc rate is 1521 Kbps. Here's my scenario:

1. When I do a bulk download and upload (classified into one of the 10% classes) and a ping (classified into the 70% class), the ping test gives great, low-latency results, about 30-40 ms. The bulk upload runs at about 1500 Kbps, since it can borrow from every other class.

2. When, in addition to the upload and ping, I run an iperf UDP upload with a packet size of 200 bytes at 200 Kbps, the ping still gives great results. In this case the low-latency class carries about 125 packets/sec and 234 Kbps of traffic.

3. When the iperf UDP packet size is changed to 60 bytes at 200 Kbps, so that it sends ~400 packets/sec and 280 Kbps (due to the additional header overhead on the smaller packets), the latency on my ping increases to 200-500 ms.

4. When the iperf UDP packet size is again 200 bytes and the rate is 630 Kbps, so that it sends ~400 packets/sec and 707 Kbps, the ping latency is again 500 ms or so.

In all cases, the bulk transfers are restricted to what appears to be the right rate given the rate of the UDP upload.
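For concreteness, the UDP tests above correspond to iperf invocations along these lines (classic iperf 2 flags, where -l sets the UDP datagram size; 10.0.0.1 is a stand-in for the far-end address):

iperf -s -u                           # receiver
iperf -c 10.0.0.1 -u -l 200 -b 200k   # scenario 2: 200-byte datagrams, ~125 pkt/s
iperf -c 10.0.0.1 -u -l 60 -b 200k    # scenario 3: 60-byte datagrams, ~400 pkt/s
iperf -c 10.0.0.1 -u -l 200 -b 630k   # scenario 4: 200-byte datagrams, ~400 pkt/s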
So to summarize, when there is a bulk upload going and 1065 Kbps is reserved for the high-priority class:

- when a 200 Kbps, 125 packet/sec high-priority transfer is started, the latency remains low
- when a 200 Kbps, 400 packet/sec high-priority transfer is started, the latency becomes high
- when a 630 Kbps, 400 packet/sec high-priority transfer is started, the latency becomes high

So my question is: why does the latency increase on my class with 1065 Kbps reserved when its packet rate is about 400/sec, even though the total traffic rate on the class is between 280 and 707 Kbps, much less than 1065 Kbps? And why does the latency remain low when the traffic rate is about the same (roughly 230 Kbps) but the UDP packet rate is lower?

If it matters: the qdisc is on a tun device, the packets are classified with iptables, the DSL line(s) run over ATM, and the host is Debian Squeeze with kernel 2.6.32. Below is my qdisc and class configuration.

Many thanks,
Matt Fox

root@...:~# tc -s qdisc show dev tun85
qdisc htb 130: root refcnt 2 r2q 10 default 510 direct_packets_stat 0
 Sent 131314537 bytes 358445 pkt (dropped 3676, overlimits 311533 requeues 0)
 rate 0bit 0pps backlog 0b 0p requeues 0
qdisc sfq 8019: parent 130:470 limit 127p quantum 1452b perturb 10sec
 Sent 16833120 bytes 168707 pkt (dropped 0, overlimits 0 requeues 0)
 rate 0bit 0pps backlog 0b 0p requeues 0
qdisc sfq 801a: parent 130:490 limit 127p quantum 1452b perturb 10sec
 Sent 3221738 bytes 59136 pkt (dropped 6, overlimits 0 requeues 0)
 rate 0bit 0pps backlog 0b 0p requeues 0
qdisc sfq 801b: parent 130:510 limit 127p quantum 1452b perturb 10sec
 Sent 111259679 bytes 130602 pkt (dropped 3670, overlimits 0 requeues 0)
 rate 0bit 0pps backlog 0b 0p requeues 0
qdisc sfq 801c: parent 130:530 limit 127p quantum 1452b perturb 10sec
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 rate 0bit 0pps backlog 0b 0p requeues 0

root@...:~# tc -s class show dev tun85
class htb 130:1 root rate 1521Kbit ceil 1521Kbit burst 1599b cburst 1599b
 Sent 131232629 bytes 358231 pkt (dropped 0, overlimits 0 requeues 0)
 rate 1432bit 0pps backlog 0b 0p requeues 0
 lended: 67290 borrowed: 0 giants: 0
 tokens: 67719 ctokens: 67719
class htb 130:470 parent 130:1 leaf 8019: prio 0 rate 1065Kbit ceil 1521Kbit burst 1599b cburst 1599b
 Sent 16824044 bytes 168691 pkt (dropped 0, overlimits 0 requeues 0)
 rate 1400bit 0pps backlog 0b 0p requeues 0
 lended: 168570 borrowed: 121 giants: 0
 tokens: 96750 ctokens: 67719
class htb 130:510 parent 130:1 leaf 801b: prio 2 rate 152096bit ceil 1521Kbit burst 1599b cburst 1599b
 Sent 111219641 bytes 130404 pkt (dropped 3670, overlimits 0 requeues 0)
 rate 32bit 0pps backlog 0b 0p requeues 0
 lended: 67457 borrowed: 62947 giants: 0
 tokens: 1242641 ctokens: 124266
class htb 130:490 parent 130:1 leaf 801a: prio 1 rate 152096bit ceil 1521Kbit burst 1599b cburst 1599b
 Sent 3221738 bytes 59136 pkt (dropped 6, overlimits 0 requeues 0)
 rate 0bit 0pps backlog 0b 0p requeues 0
 lended: 54914 borrowed: 4222 giants: 0
 tokens: 1262360 ctokens: 126234
class htb 130:530 parent 130:1 leaf 801c: prio 3 rate 152096bit ceil 1521Kbit burst 1599b cburst 1599b
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 rate 0bit 0pps backlog 0b 0p requeues 0
 lended: 0 borrowed: 0 giants: 0
 tokens: 1314953 ctokens: 131484
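In case it is easier to read than the statistics dump, the hierarchy above should be roughly equivalent to the following setup commands (reconstructed from the output, with the 152096bit rates rounded; the sfq handles were assigned by the kernel, and the iptables classification rules are omitted):

tc qdisc add dev tun85 root handle 130: htb r2q 10 default 510
tc class add dev tun85 parent 130: classid 130:1 htb rate 1521kbit ceil 1521kbit
tc class add dev tun85 parent 130:1 classid 130:470 htb rate 1065kbit ceil 1521kbit prio 0
tc class add dev tun85 parent 130:1 classid 130:490 htb rate 152kbit ceil 1521kbit prio 1
tc class add dev tun85 parent 130:1 classid 130:510 htb rate 152kbit ceil 1521kbit prio 2
tc class add dev tun85 parent 130:1 classid 130:530 htb rate 152kbit ceil 1521kbit prio 3
tc qdisc add dev tun85 parent 130:470 sfq perturb 10
tc qdisc add dev tun85 parent 130:490 sfq perturb 10
tc qdisc add dev tun85 parent 130:510 sfq perturb 10
tc qdisc add dev tun85 parent 130:530 sfq perturb 10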