[LARTC] SFQ buckets/extensions

Linux Advanced Routing and Traffic Control

 > ... What if SFQ were to start with a minimal number of buckets, and
 > track how 'deep' each bucket was, then go to a larger number of bits
 > (2/4 at a time?) if the buckets hit a certain depth?  Theoretically,
 > this would mean that 'fairness' would be achieved more often in current
 > collision situations but that a smaller number of buckets would be
 > necessary to achieve fairness in currently low-collision situations.
 > 
 > I haven't looked at the SFQ code in a while, so I don't know how much
 > benefit this would be in terms of processing time, or even how expensive
 > it would be to change hash sizes on the fly, but at a certain level of
 > resolution (+/- 2-4 bits), the changes wouldn't be terribly frequent
 > anyway.

A few reactions:
- The only runtime cost of lots of buckets is a small amount of
storage for each bucket.  Allocating buckets at runtime also
introduces the problem that you could run out of space.
- There's no advantage to having many more buckets than the number
of packets you're willing to queue, which is typically only on the
order of a few hundred.  (Rough numbers in the sketch below.)
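
To put rough numbers on that, here's a minimal user-space sketch (not
the real sch_sfq.c; the sizes and fields are only illustrative) of why
a generous, statically sized bucket table is cheap:

#include <stdio.h>

/* Illustrative sizes only; the real sfq uses its own constants. */
#define BUCKETS   1024   /* hash divisor: number of buckets  */
#define MAXPKTS    128   /* packets the qdisc will ever hold */

struct bucket {
    int head;            /* index of first queued packet, -1 if empty */
    int tail;            /* index of last queued packet               */
    int backlog;         /* bytes currently queued in this bucket     */
};

int main(void)
{
    /* Statically sized table: no runtime allocation to run out of. */
    static struct bucket table[BUCKETS];

    printf("per-bucket cost : %zu bytes\n", sizeof(struct bucket));
    printf("whole table     : %zu bytes for %d buckets\n",
           sizeof(table), BUCKETS);
    printf("only %d packets can ever be queued, so buckets beyond\n"
           "that add nothing to fairness.\n", MAXPKTS);
    return 0;
}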

==== extensions
 > And all the discussions tend to lead to the conclusion that there should
 > be an sfq option (when the queue is created) for:
 > 	a) how big the hash is
 > 	b) whether to take into account source ports or not
 > 	c) whether to take into account destination ports or not
 > 	d) etc. :)
 > 
 > Maybe someone who's written a qdisc would feel up to this?

I've been hoping to get to it, since I have other stuff I'd like to
incorporate into a new sfq version.  
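
For discussion, here's roughly how (a)-(c) could look as per-queue
options, with the hash deciding which fields of the packet to mix in.
This is only a user-space sketch; the struct, the field names, and the
mixing function are invented for illustration, not taken from sfq or
from anyone's patch:

#include <stdint.h>
#include <stdio.h>

/* Per-queue options, roughly matching (a)-(c) above. */
struct sfq_opts {
    unsigned divisor;     /* (a) hash table size, a power of two */
    int use_sports;       /* (b) mix in the source port?         */
    int use_dports;       /* (c) mix in the destination port?    */
};

struct flow {
    uint32_t saddr, daddr;
    uint16_t sport, dport;
    uint8_t  proto;
};

/* Toy mixing function; real sfq has its own hash plus a periodically
 * perturbed salt. */
static unsigned flow_hash(const struct flow *f,
                          const struct sfq_opts *o, uint32_t perturb)
{
    uint32_t h = f->saddr ^ (f->daddr << 7) ^ f->proto ^ perturb;

    if (o->use_sports)
        h ^= (uint32_t)f->sport << 16;
    if (o->use_dports)
        h ^= f->dport;

    h *= 2654435761u;               /* Knuth multiplicative step */
    return h & (o->divisor - 1);    /* assumes divisor is 2^n    */
}

int main(void)
{
    struct sfq_opts o = { .divisor = 1024, .use_sports = 1,
                          .use_dports = 0 };
    struct flow f = { .saddr = 0x0a000001, .daddr = 0x0a000002,
                      .sport = 12345, .dport = 80, .proto = 6 };

    printf("bucket = %u\n", flow_hash(&f, &o, 0x5eed));
    return 0;
}

Option (d) would presumably just be more fields in the same struct.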

 > From: Alexander Atanasov <alex@ssi.bg>
 > 	I've done some work in this direction; it probably needs more
 > work, and it's poorly tested - expect b00ms ;)
 > 	
 > 	This adds a new qdisc for now - esfq, which is a 100% clone of
 > the original sfq.
 > 	- You can set all sfq parameters: hash table size, queue depths,
 > queue limits.
 > 	- You can choose from 3 hash types: original (classic), dst IP,
 > src IP.
 > 	Things to consider: perturbation with dst and src hashes is not
 > good IMHO; you can try perturb 0 if it causes trouble.
 > 
 > 	Please, see the attached files.
 > 
 > 	Playing with it gives interesting results:
 > 	higher depth -> makes flows equal slower
 > 	small depth  -> makes flows equal faster
 > 	limit kills big delays when set at about 75-85% of depth.

I don't understand what these last three lines mean.  Could you
explain?
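
To check that I'm reading the hash types correctly, here's a sketch of
what I understand "classic", "dst", and "src" to select (the
identifiers and the mixing step are invented for illustration, not
taken from the esfq code):

#include <stdint.h>
#include <stdio.h>

/* Sketch only: not the names used in the esfq patch. */
enum hash_kind { HASH_CLASSIC, HASH_DST, HASH_SRC };

struct pkt_keys {
    uint32_t saddr, daddr;
    uint16_t sport, dport;
    uint8_t  proto;
};

/* Decide what counts as "one flow" for fairness purposes. */
static uint32_t flow_key(const struct pkt_keys *k, enum hash_kind kind)
{
    switch (kind) {
    case HASH_DST:        /* one bucket per destination host */
        return k->daddr;
    case HASH_SRC:        /* one bucket per source host      */
        return k->saddr;
    case HASH_CLASSIC:    /* per connection, as in plain sfq */
    default:
        return k->saddr ^ k->daddr ^
               ((uint32_t)k->sport << 16) ^ k->dport ^ k->proto;
    }
}

int main(void)
{
    struct pkt_keys k = { .saddr = 0x0a000001, .daddr = 0xc0a80101,
                          .sport = 40000, .dport = 443, .proto = 6 };

    printf("classic=%08x dst=%08x src=%08x\n",
           flow_key(&k, HASH_CLASSIC),
           flow_key(&k, HASH_DST),
           flow_key(&k, HASH_SRC));
    return 0;
}

With per-host hashing, perturbation only reshuffles which bucket each
host lands in, which I take to be why you suggest trying perturb 0.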

 > 
 > 	Needs testing and measurements - that's why I made it a
 > separate qdisc and not a patch over sfq; I wanted to compare both.
 > 
 > 	Any feedback good or bad is welcome. 

I'll send you my current module, also a variant of SFQ.  It contains
documentation that I think is worth including, makes some of the code
more understandable, separates the number of packets allowed in the
queue from the number of buckets, supports the time limit (discussed
in earlier messages), and controls these things via /proc, plus maybe
a few other things I'm forgetting.  This version does not support
hashing on different properties of the packet, because it uses a
totally different criterion for identifying "subclasses" of traffic.
You can discard that and restore the sfq hash with your modifications.
I think (hope) these changes are pretty much independent.
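
To make that separation concrete, here is a tiny sketch of the knobs
kept independent of each other (the names and defaults are invented
for illustration; they are not the module's actual /proc entries):

#include <stdio.h>

/* Independent tunables, as described above; illustrative only. */
struct sfq_tunables {
    unsigned buckets;        /* number of flow buckets (hash size)   */
    unsigned packet_limit;   /* total packets the qdisc may hold     */
    unsigned time_limit_ms;  /* drop packets queued longer than this */
    unsigned perturb_sec;    /* how often to re-salt the hash, 0=off */
};

int main(void)
{
    struct sfq_tunables t = {
        .buckets       = 1024,
        .packet_limit  = 128,   /* independent of .buckets */
        .time_limit_ms = 500,
        .perturb_sec   = 10,
    };

    printf("buckets=%u limit=%u pkts time_limit=%u ms perturb=%u s\n",
           t.buckets, t.packet_limit, t.time_limit_ms, t.perturb_sec);
    return 0;
}
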
_______________________________________________
LARTC mailing list / LARTC@mailman.ds9a.nl
http://mailman.ds9a.nl/mailman/listinfo/lartc HOWTO: http://lartc.org/
