Re: [LARTC] SFQ buckets/extensions

Linux Advanced Routing and Traffic Control

On Wed, 5 Jun 2002, Don Cohen wrote:

> Alexander Atanasov writes:
>  > >  > 	Playing with it gives interesting results:
>  > >  > 	higher depth -> flows become equal more slowly
>  > >  > 	small depth  -> flows become equal faster
>  > >  > 	limit kills big delays when set at about 75-85% of depth.
>  > > 
>  > > I don't understand what these last three lines mean.  Could you
>  > > explain?
>  > 
>  > 	depth is how many packets can be queued on a row of the
>  > hash table. If you have large queues (higher depth), sfq reacts more slowly
>  > when a new flow appears (it has to do more work to make the queue lengths
>  > equal). When you have short queues it reacts faster, so adjusting depth
>  > to your bandwidth and traffic type can make it work better.
>  > I set up a bounded cbq class at 320kbit and esfq with a dst hash:
>  > 	Start an upload - it gets 40KB/s.
>  > 	Start a second one - it should get 20KB/s as soon as possible to be fair.
>  > With depth 128 it would take, say, 6 seconds to bring both to 20KB/s; with
>  > depth 64, about 3 seconds - the shorter queue drops packets earlier.
>  > (I have to make some exact measurements, since this is just an example
>  > and may not be accurate.)
> I don't see why that should be the case.  And I don't recall ever
> observing it.  This adaptation time should be practically zero.
> There's no work in making the queues equal.

	When the queues get full, sfq has to drop packets, and it must
send less (delay) or drop on the faster flows, which get the bigger queues,
in order to slow them down. That's the work I mean. And it really depends
on the subflow depth.
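
Here is a rough toy sketch of the "work" I mean (my own illustration, not
the kernel source - the names are made up, and it is only loosely in the
spirit of sfq_drop() in net/sched/sch_sfq.c): when the whole queue is over
its limit, the drop is taken from the longest subqueue, and that is what
slows the fast flow down.

    /* Toy model: drop one packet from the longest subqueue when the
     * qdisc as a whole is over its limit. */
    #define SUBQUEUES 128

    struct subq {
        int len;                /* packets waiting in this hash bucket */
    };

    struct sfq {
        struct subq q[SUBQUEUES];
        int total;              /* packets waiting in the whole qdisc */
        int limit;              /* e.g. 128 for stock sfq */
    };

    static void drop_from_longest(struct sfq *s)
    {
        int i, longest = 0;

        for (i = 1; i < SUBQUEUES; i++)
            if (s->q[i].len > s->q[longest].len)
                longest = i;

        if (s->q[longest].len > 0) {
            s->q[longest].len--;    /* the fastest flow pays for the overflow */
            s->total--;
        }
    }

The deeper a subflow is allowed to grow before this kicks in, the longer
it takes for a new flow to reach its fair share.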

> (Let's use the word queue to mean the whole SFQ and subqueue for the
> part sharing a hash index.)
> If you have, say, 100 packets in one subqueue and 10 in another they're
> already sharing the bandwidth 50-50. 

	No, that means the flow with 100 packets is using about
10 times more bandwidth. To be fair, both must have a length of around 50.

> 
>  > 	limit sets a threshold on queued packets - if a packet exceeds it,
>  > it's dropped, so the delay is smaller, but when sfq tries to make the flows
>  > equal it counts against depth, not limit. With the above example, depth 128
>  > and limit 100:
>  > 	When the first upload enqueues 100 packets, sfq starts to drop,
>  > but the goal for making the flows equal is 64 packets per queue. The flow
>  > doesn't get the extra 28 packets queued - packets which would be delayed
>  > for a long time and probably dropped when received anyway.
> I disagree that the goal is to make the subqueues the same length.
> The goal is to serve them with the same bandwidth (as long as they
> don't become empty.)

	I agree that the goal is to serve the same bandwidth, but
bandwidth is served with packets, which carry data of X bytes up to the MTU,
so I consider packets the base unit. (In the example, assume both ftp
flows send packets of the same size, with the same data, at the same rate.)
You have:
	enqueue -> put in the subflow   - len++
	dequeue -> get from the subflow - len--

So when dequeues occur more slowly than enqueues, the subflows fill up -
packets are delayed and at some point get dropped. So the problem is: when
your queues get full of packets, how do you dequeue so as to be fair to the
flows identified by the hash? SFQ tries to keep the flows' queues equal to
do this.
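
To make that bookkeeping concrete, another toy sketch (again my own
illustration, not the real code): enqueue hashes the packet into a subflow
and does len++, dequeue walks the non-empty subflows round-robin and does
len--; the drop-from-longest step from the earlier sketch runs when the
total backlog hits the limit.

    #define SUBQUEUES 128

    struct subq {
        int len;
    };

    struct sfq {
        struct subq q[SUBQUEUES];
        int total;
        int next;               /* round-robin cursor for dequeue */
    };

    static void enqueue(struct sfq *s, unsigned int hash)
    {
        s->q[hash % SUBQUEUES].len++;   /* len++ : packet waits in its subflow */
        s->total++;
    }

    /* Serve one packet; returns the subflow served, or -1 if all are empty. */
    static int dequeue(struct sfq *s)
    {
        int i;

        for (i = 0; i < SUBQUEUES; i++) {
            int b = (s->next + i) % SUBQUEUES;

            if (s->q[b].len > 0) {
                s->q[b].len--;          /* len-- : one packet goes out */
                s->total--;
                s->next = (b + 1) % SUBQUEUES;
                return b;
            }
        }
        return -1;
    }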

> Queue length depends on how many packets each download is willing to
> send without an ack.  If one is willing to send 100 and the other is
> willing to send 10, then the subqueues will likely be length 100 and
> 10, but each will still get the same bandwidth.  Without window

	That's true if you can send 110 packets and not exceed your
bandwidth; the trouble comes when you can only send 100 - how do you deal
with the 10 that are in excess?

> scaling the max window size is 64K which is only about 45 packets,
> so it's not really normal to have 100 packets in a subqueue.

	It's normal if you have congested links; it's not good in any
way :( But that's what QoS is for - to avoid and handle this.
You have 45 packets per flow, but with 3 hash collisions you get above
the 128-packet limit and drop. esfq solves this: you can tune the
number of subflows and their lengths.
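
Back-of-the-envelope for that collision case (my own numbers, assuming
stock sfq with its fixed 128 packet limit):

    /* ~64K of window over ~1500 byte packets is about 45 packets per
     * flow; three flows hashed into the same bucket already push the
     * backlog past the stock 128 packet limit, so packets get dropped. */
    #include <stdio.h>

    int main(void)
    {
        int pkts_per_flow = 45;     /* 64K window, ~1500 byte MTU */
        int colliding     = 3;      /* flows sharing one hash bucket */
        int limit         = 128;    /* fixed in stock sfq, tunable in esfq */

        int backlog = pkts_per_flow * colliding;    /* 135 */

        printf("backlog %d vs limit %d -> %s\n", backlog, limit,
               backlog > limit ? "drops" : "fits");
        return 0;
    }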

-- 
have fun,
alex


_______________________________________________
LARTC mailing list / LARTC@mailman.ds9a.nl
http://mailman.ds9a.nl/mailman/listinfo/lartc HOWTO: http://lartc.org/
