Re: [LARTC] SFQ buckets/extensions

Linux Advanced Routing and Traffic Control

Alexander Atanasov writes:
 > 	At first look - I think I'll have to incorporate my changes into
 > your work. I haven't done much, just added hashes and unlocked what
 > Alexey Kuznetsov did.
Not quite that simple.  You have to throw out about half of my file,
mostly the last third or so, which replaces the hash function, plus 
most of the /proc stuff, and probably a lot of other little pieces
scattered here and there.
I now recall a few other things - I support configuration of service
weights for different subqueues, which makes no sense for SFQ; I also
record the amount of service (bytes and packets) per subqueue and
report it via tc -s -d, which likewise makes no sense for SFQ.
After removing all that stuff you then have to restore the hash.
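As background on "the hash" being discussed: classic SFQ assigns each packet to a subqueue by hashing the flow's addresses and ports, and re-keys the hash periodically so unlucky collisions don't persist. A minimal sketch of that bucketing (Python's built-in hash stands in for the kernel's Jenkins hash; this is an illustration, not the kernel code):

```python
# Illustrative sketch of SFQ-style flow bucketing. The real kernel code
# hashes the packet headers with a Jenkins hash; hash() stands in here.

DIVISOR = 1024          # number of hash buckets (sfq's default)
PERTURB_PERIOD = 10     # seconds between re-keyings (informational)

def flow_bucket(src, dst, sport, dport, perturbation):
    """Map a flow 4-tuple to a subqueue index in [0, DIVISOR)."""
    return hash((src, dst, sport, dport, perturbation)) % DIVISOR

# Packets of the same flow land in the same subqueue within one
# perturbation period, so they are serviced as one flow.
b1 = flow_bucket("10.0.0.1", "10.0.0.2", 1234, 80, perturbation=0)
b2 = flow_bucket("10.0.0.1", "10.0.0.2", 1234, 80, perturbation=0)
assert b1 == b2
```

Changing the perturbation value generally moves flows to different buckets, which is how SFQ breaks up long-lived collisions.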

 > >  > 	Playing with it gives interesting results:
 > >  > 	higher depth  -> makes flows equal more slowly
 > >  > 	smaller depth -> makes flows equal faster
 > >  > 	limit kills big delays when set at about 75-85% of depth.
 > > 
 > > I don't understand what these last three lines mean.  Could you
 > > explain?
 > 
 > 	depth is how many packets can be queued on a row of the
 > hash table. If you have large queues (higher depth) sfq reacts more
 > slowly when a new flow appears (it has to do more work to make queue
 > lengths equal). When you have short queues it reacts faster, so
 > adjusting depth to your bandwidth and traffic type can make it do a
 > better job. I set a bounded cbq class at 320kbit and esfq with dst hash:
 > 	Start an upload - it gets 40KB/s.
 > 	Start a second one - it should get 20KB/s asap to be fair.
 > With depth 128 it would take, say, 6 sec. for both to reach 20KB/s;
 > with depth 64 about 3 sec - packets are dropped earlier with the
 > shorter queue. (I have to make some exact measurements since this is
 > just an example and may not be correct.)
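For concreteness, the setup described in the quote would look roughly like this. Note that esfq is an out-of-tree patch, not mainline; the parameter names below (depth, limit, divisor, hash) follow the esfq patch and may differ between versions:

```shell
# Bounded CBQ class at 320kbit with ESFQ attached, hashing on dst.
# esfq and its depth/limit/divisor/hash options come from the ESFQ
# patch; adjust device name and bandwidths for your setup.
tc qdisc add dev eth0 root handle 1: cbq bandwidth 100Mbit avpkt 1000
tc class add dev eth0 parent 1: classid 1:1 cbq bandwidth 100Mbit \
    rate 320kbit allot 1514 prio 5 bounded avpkt 1000
tc qdisc add dev eth0 parent 1:1 handle 10: esfq perturb 10 \
    depth 128 limit 100 divisor 10 hash dst
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
    match ip dst 0.0.0.0/0 flowid 1:1
```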
I don't see why that should be the case.  And I don't recall ever
observing it.  This adaptation time should be practically zero.
There's no work in making the queues equal.
(Let's use the word queue to mean the whole SFQ and subqueue for the
part sharing a hash index.)
If you have, say, 100 packets in one subqueue and 10 in another they're
already sharing the bandwidth 50-50. 

 > 	limit sets a threshold on queued packets - if a packet would
 > exceed it, it's dropped, so delay is smaller; but when sfq tries to
 > make flows equal it counts depth, not limit. With the above example,
 > depth 128 and limit 100:
 > 	When the first upload enqueues 100 packets sfq starts to drop,
 > but the goal of making flows equal is 64 packets per queue. The flow
 > doesn't get the 28 packets which were to be enqueued; they are delayed
 > for a long time and probably dropped when received.
I disagree that the goal is to make the subqueues the same length.
The goal is to serve them with the same bandwidth (as long as they
don't become empty.)
Queue length depends on how many packets each sender is willing to
send without an ack.  If one is willing to send 100 and the other is
willing to send 10, then the subqueues will likely be length 100 and
10, but each will still get the same bandwidth.  Without window
scaling the max window size is 64KB, which at a 1500-byte MTU is only
about 45 packets, so it's not really normal to have 100 packets in a
subqueue.
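The packet-count figure above is just the unscaled window divided by the segment size (assuming full-size 1500-byte frames, so a 1460-byte MSS after IP and TCP headers):

```python
# Max TCP window without window scaling is 64KB; at a 1500-byte MTU
# (1460-byte MSS after 40 bytes of IP+TCP headers) that is ~45 packets.
max_window = 64 * 1024          # bytes
mss = 1460                      # bytes of payload per full-size packet
packets = max_window / mss
print(round(packets))           # -> 45
```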

_______________________________________________
LARTC mailing list / LARTC@mailman.ds9a.nl
http://mailman.ds9a.nl/mailman/listinfo/lartc HOWTO: http://lartc.org/
