Re: newbie queuing priority question

I do not know, but for a normal Ethernet card a well-written
kernel driver must tell the upper layers of the kernel that there
is a jam by calling netif_stop_queue(netdev), and such a driver
should keep only a handful of buffers before it considers the
link jammed.
The packets then wait in the qdisc, which is where the priorities
play their role.
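
A quick way to see whether the backlog really sits in the qdisc,
where the priorities apply, is the qdisc statistics; just a sketch,
assuming the interface is ppp0 as in your tests:

tc -s qdisc ls dev ppp0

If the "backlog" counter stays near zero while the link is clearly
saturated, the packets are waiting in a buffer below the qdisc, not
in the qdisc itself.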

For ppp, I have had a case where ppp was attached to a "false"
tty (created with openpty); in the kernel this tty led to a
user-space process, which finally led to a serial driver back in
the kernel.
With that kind of architecture it is clear that priorities at the
ppp interface level had no impact.
I suppose the reason it does not work differs from one setup to
another, but if you have a 4K buffer below the qdisc, it will fill
with low-priority packets mixed with high-priority ones.

If you know your exact throughput, you should shape at the qdisc
level to slightly below that rate, so that the traffic stays fluid
downstream of the qdisc and the backlog remains where the
priorities take effect.
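
For example, a minimal sketch using htb (hfsc works too and gives
finer control; the numbers are only illustrative and assume the
interface is ppp0 on a 115200 baud 8N1 serial link):

tc qdisc del dev ppp0 root
# one htb class shaped a little below the usable payload rate
# (115200 baud with 8N1 framing is roughly 92 kbit/s of payload)
tc qdisc add dev ppp0 root handle 1: htb default 1
tc class add dev ppp0 parent 1: classid 1:1 htb rate 85kbit
# the backlog now builds up in the prio qdisc, where the
# high-priority band is dequeued first
tc qdisc add dev ppp0 parent 1:1 handle 10: prio
# same u32 filters as before, now attached to the prio qdisc
tc filter add dev ppp0 parent 10:0 protocol all u32 match u32 0x00000000 0x00000000 at 0 flowid 10:3
tc filter add dev ppp0 parent 10:0 protocol all u32 match u32 0x00E00000 0x00E00000 at 0 flowid 10:1

With the shaper in place the queue forms inside the prio qdisc
instead of in the serial buffer below it, so the pings should
overtake the bulk http traffic.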



On Wednesday 18 November 2009 at 09:30 -0800, David L wrote:
> On Tue, Nov 17, 2009 at 5:00 PM, clownix  wrote:
> > The qdisc may not be the only queue in your system. I have had
> > bad qdisc experiences because of another waiting queue sitting
> > after the qdisc, and the jam is, sadly, at that point:
> >
> > traffic ---> QDISC ----> BIG JAM ----> ppp link
> 
> Is it possible this mystery queue is a serial port
> kernel driver buffer for the serial port that ppp is
> connected to?  I think the serial port drivers have 4k
> buffers, which is pretty close to the ~4600 bytes
> that seem to be getting buffered up somewhere.
> 
> Cheers...
> 
>             Dave
> 
> >
> > If the qdisc can transmit freely, it does not "feel" the jam
> > further downstream and lets low-priority traffic pass, because
> > from its point of view there is no problem.
> >
> > A solution, not perfect because it wastes a little bandwidth,
> > is to "shape" your traffic before it reaches the jam.
> >
> > For this, you can use hfsc.
> > I have made a little tool that can monitor the qdiscs, and an
> > hfsc script is provided with it. If you have some time, you can
> > try to run it; it is at http://clownix.net
> >
> > Regards
> > Vincent Perrier
> >
> >
> >
> > On Tuesday 17 November 2009 at 16:29 -0800, David L wrote:
> >> On Tue, Nov 17, 2009 at 3:11 PM, clownix wrote:
> >> > Try the following:
> >> >
> >> > tc qdisc del root dev eth0
> >> > tc qdisc add dev eth0 root handle 1: prio
> >> > tc filter add dev eth0 parent 1:0 protocol all u32 match u32 0x00000000 0x00000000 at 0 flowid 1:3
> >> > tc filter add dev eth0 parent 1:0 protocol all u32 match u32 0x00E00000 0x00E00000 at 0 flowid 1:1
> >> >
> >> >
> >> > ping -Q 255 192.168.1.1
> >> > tc -s class ls dev eth0
> >> >
> >>
> >> Thanks for your response.
> >>
> >> I tried this on both sides of the ppp link and I see that the filter
> >> is categorizing the pings differently from the miscellaneous http traffic:
> >>
> >>  tc -s class ls dev ppp0
> >> class prio 1:1 parent 1:
> >>  Sent 5124 bytes 61 pkt (dropped 0, overlimits 0 requeues 0)
> >>  backlog 0b 0p requeues 0
> >> class prio 1:2 parent 1:
> >>  Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
> >>  backlog 0b 0p requeues 0
> >> class prio 1:3 parent 1:
> >>  Sent 1144196 bytes 4119 pkt (dropped 68, overlimits 0 requeues 0)
> >>  backlog 0b 0p requeues 0
> >>
> >>
> >> However, the ping statistics are a lot worse when the http traffic is active:
> >>
> >> 60 packets transmitted, 60 received, 0% packet loss, time 59057ms
> >> rtt min/avg/max/mdev = 5.828/65.577/404.738/90.916 ms
> >>
> >> than when it is inactive:
> >>
> >> 60 packets transmitted, 60 received, 0% packet loss, time 59117ms
> >> rtt min/avg/max/mdev = 5.931/6.747/7.418/0.356 ms
> >>
> >>
> >> 404 msecs maximum ping time when I had http traffic active
> >> versus 7 msecs when it wasn't active.  400 msec difference
> >> corresponds to about 4600 bytes over the 115200 ppp serial
> >> link.  The MTU was set to 500 bytes, so I don't understand
> >> where that time is coming from if the pings are being queued
> >> in preference to the http traffic.  I'd expect the maximum ping
> >> time to be about 50 msec, not 400 msec.  What am I missing?
> >>
> >> Thanks,
> >>
> >>             David
> >>
> >>
> >> >
> >> > On Tuesday 17 November 2009 at 14:14 -0800, David L wrote:
> >> >> Hi,
> >> >>
> >> >> I need to prioritize data sent on a socket over a ppp link
> >> >> so it is transmitted before some other data sharing that link.
> >> >> I googled around for a few days and I thought I understood
> >> >> how I might go about doing this, but my attempts have
> >> >> failed.
> >> >>
> >> >> I thought the default qdisc (pfifo_fast) would prioritize data
> >> >> flagged as "lowdelay" by putting it in a different band that has
> >> >> preferential queuing priority over data in other bands.  I
> >> >> attempted to configure the socket like this:
> >> >>
> >> >>   int lowdelay = IPTOS_LOWDELAY;
> >> >>   return setsockopt(servFd_, IPPROTO_IP, IP_TOS,
> >> >>                     (void *)&lowdelay, sizeof(lowdelay));
> >> >>
> >> >>

--
To unsubscribe from this list: send the line "unsubscribe linux-net" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
