Re: PQ questions

Linux Advanced Routing and Traffic Control

Hi, 

On Fri, 2007-06-15 at 17:13 +0800, Salim S I wrote: 
> I tested on a wireless link. It could give a maximum of 45Mbps. And I sent
> 30Mbps of both low prio and high prio traffic. Total of 60Mbps.

Do you mean that your wireless link can transmit at most 45Mbps?
If so, my point was that if you generate almost (or more than)
45Mbps of high prio traffic, then there is nothing (or almost
nothing) left for the low prio traffic.
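
Just as a sketch of what I mean (iperf and the port numbers are only
examples; this assumes your tc filters map the two destination ports
to the high and low prio bands respectively):

  # "high prio" stream: ~45Mbit/s of UDP towards port 5001
  iperf -u -c <receiver> -p 5001 -b 45M -t 60 &
  # "low prio" stream: ~20Mbit/s of UDP towards port 5002
  iperf -u -c <receiver> -p 5002 -b 20M -t 60 &

With a link that tops out around 45Mbps, the high prio stream alone
can fill it, and the low prio band should get almost nothing for as
long as the high prio band stays backlogged.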

When you forward the above traffic (as opposed to generating it
locally) there are other factors to take into account that can change
the overall behavior.
For example, each CPU has one ingress queue that is shared by all
ingress traffic received on interfaces whose driver does not use
NAPI. These per-CPU queues are traversed before the ingress queueing
disciplines and have nothing to do with Traffic Control.
It is therefore possible that, under heavy load, the low prio traffic
fills a significant portion of those CPU queues and reduces the
amount of high prio traffic that reaches the egress queueing
discipline (leaving the low priority traffic more opportunities to be
scheduled).
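
If you want to check whether that per-CPU queue is overflowing, have
a look at /proc/net/softnet_stat: there is one line per CPU and, if I
remember correctly, the second (hex) column counts the packets dropped
because that queue was full. The queue length is the
netdev_max_backlog sysctl:

  # length of the per-CPU ingress (backlog) queue
  sysctl net.core.netdev_max_backlog
  # one line per CPU; 2nd column = drops due to a full backlog queue
  cat /proc/net/softnet_stat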

> My test was done with UDP, using tcpdump. When I increased the bandwidth
> to 40Mbps each, the high priority class got less bandwidth. (maybe the
> effect of the known issue that a large amount of low prio traffic can
> starve high prio traffic)

Possible. See my comment above.

Regards
/Christian
[ http://benve.info ]
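
P.S. regarding the hierarchical PQ question further down: the nested
prio qdiscs only give you the extra bands; you still need the priomap
(or explicit filters) to steer traffic into them. Just as a sketch,
with u32 filters on the destination port (the ports are only examples):

  # outer prio: send port 5001 to band 1:1 (where qdisc 10: is attached)
  tc filter add dev eth1 parent 1: protocol ip prio 1 u32 \
     match ip dport 5001 0xffff flowid 1:1
  # inner prio: send port 5001 to the first band of the nested qdisc
  tc filter add dev eth1 parent 10: protocol ip prio 1 u32 \
     match ip dport 5001 0xffff flowid 10:1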


> > -----Original Message-----
> > From: lartc-bounces@xxxxxxxxxxxxxxx [mailto:lartc-bounces@xxxxxxxxxxxxxxx]
> > On Behalf Of Christian Benvenuti
> > Sent: Friday, June 15, 2007 4:16 PM
> > To: lartc@xxxxxxxxxxxxxxx
> > Subject:  Re: PQ questions
> > 
> > Hi,
> >   a class is starved only if those with higher priority are
> > always (or pretty often) backlogged and do not give the lower
> > priority classes a chance to transmit.
> > Therefore, if you transmit at a rate lower than what your CPU(s) and
> > NIC(s) can handle, you will not experience any starvation.
> > 
> > For example, if you generate 50Mbit traffic on a 100Mbit NIC
> > it is likely that you won't see any starvation (unless your system is
> > not able to handle 50Mbit traffic because of a complex TC or
> > iptables configuration that consumes a lot of CPU).
> > 
> > Regards
> > /Christian
> > [ http://benve.info ]
> > 
> > On Fri, 2007-06-15 at 15:46 +0800, Salim S I wrote:
> > > Slightly offtopic... Has anyone really experienced starvation of low
> > > priority traffic with the PRIO qdisc?
> > > In my setup, I never achieved that, though I also wanted exactly that
> > > situation. I gave both classes the same amount of traffic at the same
> > > time. High prio got more bandwidth, but no starvation, even after I
> > > sent more traffic than the link capacity.
> > >
> > > > -----Original Message-----
> > > > From: lartc-bounces@xxxxxxxxxxxxxxx [mailto:lartc-bounces@xxxxxxxxxxxxxxx]
> > > > On Behalf Of Christian Benvenuti
> > > > Sent: Friday, June 15, 2007 3:32 PM
> > > > To: lartc@xxxxxxxxxxxxxxx
> > > > Subject:  Re: PQ questions
> > > >
> > > > Hi,
> > > >
> > > > > > Your config does not prevent a higher priority class from
> > > > > > starving a lower priority class.
> > > > >
> > > > > Exactly. That is the requirement.
> > > >
> > > > OK
> > > >
> > > > > Those stats are nice to have, but the ones I must have are for how
> > > > > many bytes/packets are enqueued at whatever time I check the queues.
> > > >
> > > > That information is there. Here is an example:
> > > > (b=bytes p=packets)
> > > >
> > > > #tc -s -d qdisc list dev eth1
> > > >
> > > > qdisc prio 1:  root bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
> > > >   Sent 85357186 bytes 59299 pkt (dropped 0, overlimits 0 requeues 0)
> > > >   rate 0bit 0pps backlog 0b 35p requeues 0
> > > >                          +-> This field is not initialized for this
> > > >                              qdisc type
> > > > qdisc pfifo 10:  parent 1:1 limit 1000p
> > > >   Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
> > > >  rate 0bit 0pps backlog 0b 0p requeues 0
> > > >                 ^^^^^^^^^^^^^
> > > > qdisc pfifo 20: parent 1:2 limit 1000p
> > > >   Sent 85357120 bytes 59298 pkt (dropped 0, overlimits 0 requeues 0)
> > > >  rate 0bit 0pps backlog 50470b 35p requeues 0
> > > >                 ^^^^^^^^^^^^^^^^^^
> > > > qdisc pfifo 30: parent 1:3 limit 1000p
> > > >   Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
> > > >   rate 0bit 0pps backlog 0b 0p requeues 0
> > > >                  ^^^^^^^^^^^^^
> > > >
> > > > > I have tried to configure PQ to have two queues per filter with
> > > > > no success.
> > > >
> > > > What do you mean?
> > > >
> > > > > Is it even possible to have (what I'll call) hierarchical PQ? I
> > > > > have yet to find it.
> > > >
> > > > Something like this?
> > > >
> > > > tc qdisc add dev eth1 handle 1: root prio
> > > > tc qdisc add dev eth1 parent 1:1 handle 10 prio
> > > > tc qdisc add dev eth1 parent 1:2 handle 20 prio
> > > > tc qdisc add dev eth1 parent 1:3 handle 30 prio
> > > >
> > > > Regards
> > > > /Christian
> > > > [ http://benve.info ]


_______________________________________________
LARTC mailing list
LARTC@xxxxxxxxxxxxxxx
http://mailman.ds9a.nl/cgi-bin/mailman/listinfo/lartc
