The qdisc may not be the only queue in your system. I have had bad qdisc
experiences caused by another queue sitting after the qdisc, with the jam
at exactly that point:

traffic ---> QDISC ----> BIG JAM ----> ppp link

If the qdisc can transmit freely, it does not "feel" the jam downstream,
so it lets low-priority traffic pass as if there were no problem. One
solution, imperfect because it wastes a little bandwidth, is to shape
your traffic before the jam. For this, you can use HFSC. I have written a
small tool that monitors qdiscs; an HFSC script is provided with it. If
you have some time, you can try it out; it is at http://clownix.net

Regards

Vincent Perrier

On Tuesday, November 17, 2009 at 16:29 -0800, David L wrote:
> On Tue, Nov 17, 2009 at 3:11 PM, clownix wrote:
> > Try the following:
> >
> > tc qdisc del root dev eth0
> > tc qdisc add dev eth0 root handle 1: prio
> > tc filter add dev eth0 parent 1:0 protocol all u32 \
> >     match u32 0x00000000 0x00000000 at 0 flowid 1:3
> > tc filter add dev eth0 parent 1:0 protocol all u32 \
> >     match u32 0x00E00000 0x00E00000 at 0 flowid 1:1
> >
> > ping -Q 255 192.168.1.1
> > tc -s class ls dev eth0
> >
> Thanks for your response.
>
> I tried this on both sides of the ppp link and I see that the filter
> is categorizing the pings differently from the miscellaneous http traffic:
>
> tc -s class ls dev ppp0
> class prio 1:1 parent 1:
>  Sent 5124 bytes 61 pkt (dropped 0, overlimits 0 requeues 0)
>  backlog 0b 0p requeues 0
> class prio 1:2 parent 1:
>  Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
>  backlog 0b 0p requeues 0
> class prio 1:3 parent 1:
>  Sent 1144196 bytes 4119 pkt (dropped 68, overlimits 0 requeues 0)
>  backlog 0b 0p requeues 0
>
> However, the ping statistics are a lot worse when the http traffic is active:
>
> 60 packets transmitted, 60 received, 0% packet loss, time 59057ms
> rtt min/avg/max/mdev = 5.828/65.577/404.738/90.916 ms
>
> than when it is inactive:
>
> 60 packets transmitted, 60 received, 0% packet loss, time 59117ms
> rtt min/avg/max/mdev = 5.931/6.747/7.418/0.356 ms
>
> The maximum ping time was 404 msec with http traffic active versus
> 7 msec when it wasn't. A 400 msec difference corresponds to about
> 4600 bytes over the 115200 baud ppp serial link. The MTU was set to
> 500 bytes, so I don't understand where that time is coming from if
> the pings are being queued in preference to the http traffic. I'd
> expect the maximum ping time to be about 50 msec, not 400 msec.
> What am I missing?
>
> Thanks,
>
> David
>
> On Tuesday, November 17, 2009 at 14:14 -0800, David L wrote:
> >> Hi,
> >>
> >> I need to prioritize data sent on a socket over a ppp link
> >> so it is transmitted before some other data sharing that link.
> >> I googled around for a few days and I thought I understood
> >> how I might go about doing this, but my attempts have failed.
> >>
> >> I thought the default qdisc (pfifo_fast) would prioritize data
> >> flagged as "lowdelay" by putting it in a different band that has
> >> preferential queuing priority over data in other bands.
> >> I attempted to configure the socket like this:
> >>
> >> int lowdelay = IPTOS_LOWDELAY;
> >> return setsockopt(servFd_, IPPROTO_IP, IP_TOS,
> >>                   (void *)&lowdelay, sizeof(lowdelay));
> >>
--
To unsubscribe from this list: send the line "unsubscribe linux-net" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
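[Editor's note: the numbers in the thread support Vincent's diagnosis. At
115200 baud with 8N1 framing the line carries roughly 11520 bytes/s, so
David's 400 msec of extra latency corresponds to about 4600 bytes, i.e.
around nine MTU-500 frames buffered below the qdisc in the serial/ppp
driver, where the prio qdisc can no longer reorder them. A minimal sketch
of the "shape before the jam" fix, using HFSC as Vincent suggests, might
look like the script below. The device name (ppp0), the rates, and the
~10% safety margin are assumptions to be tuned for the actual link.]

```shell
#!/bin/sh
# Sketch: shape to just under the physical ppp rate so the backlog
# builds inside the HFSC qdisc (where low-delay traffic can jump the
# queue) instead of in the serial driver's buffer below it.
# Assumptions: ppp0, 115200 baud link (~11520 B/s), shaped to
# 100kbit (~90% of line rate); adjust for your setup.
DEV=ppp0

tc qdisc del dev $DEV root 2>/dev/null

# HFSC root; unclassified traffic falls into class 1:12.
tc qdisc add dev $DEV root handle 1: hfsc default 12

# Parent class capped slightly below the physical rate: this is the
# deliberate "waste a little bandwidth" part of the scheme.
tc class add dev $DEV parent 1: classid 1:1 hfsc \
    sc rate 100kbit ul rate 100kbit

# Leaf 1:11: real-time guarantee for the low-delay traffic.
tc class add dev $DEV parent 1:1 classid 1:11 hfsc \
    rt m2 40kbit ls m2 40kbit

# Leaf 1:12: everything else shares what remains.
tc class add dev $DEV parent 1:1 classid 1:12 hfsc ls m2 60kbit

# Same TOS-precedence match as in the prio setup earlier in the
# thread; anything not matched goes to the default class 1:12.
tc filter add dev $DEV parent 1: protocol all u32 \
    match u32 0x00E00000 0x00E00000 at 0 flowid 1:11
```

With this in place, `tc -s class ls dev ppp0` should show the backlog
accumulating in class 1:12 while 1:11 stays near-empty, which is the
behavior David expected from pfifo_fast but could not get because the
jam was below the qdisc.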