Re: [LARTC] SFQ buckets

Linux Advanced Routing and Traffic Control


 



Probably the cheaper way is to simply increase the number of buckets.
With an adaptive bin count you would probably end up with linear
hashing, which is still fast, but increasing the number of buckets and
keeping a single-level search would probably be faster still.
Just my opinion ...
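
To illustrate (just a toy userland sketch, not the real sch_sfq.c --
the names and the divisor value here are made up): with a compile-time
divisor the per-packet cost stays one hash plus one mask, no matter how
many buckets you configure.

#include <stdint.h>
#include <stdio.h>

#define SFQ_BUCKETS 2048   /* hypothetical compile-time divisor, power of two */

struct flow_key {
        uint32_t saddr, daddr;
        uint16_t sport, dport;
};

/* Toy integer mix; the kernel uses its own perturbed hash. */
static unsigned int flow_hash(const struct flow_key *k, uint32_t perturb)
{
        uint32_t h = k->saddr ^ (k->daddr << 7)
                   ^ ((uint32_t)k->sport << 16) ^ k->dport ^ perturb;
        h ^= h >> 16;
        h *= 0x45d9f3bu;
        h ^= h >> 16;
        return h & (SFQ_BUCKETS - 1);  /* single-level lookup: one mask, done */
}

int main(void)
{
        struct flow_key k = { 0x0a000001, 0x0a000002, 12345, 80 };
        printf("bucket %u of %u\n", flow_hash(&k, 0), (unsigned)SFQ_BUCKETS);
        return 0;
}
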
devik

On Tue, 4 Jun 2002, Michael T. Babcock wrote:

> Just a thought, after talking to a friend who's doing his master's work
> on RED and other queueing algorithms ...
>
> ... What if SFQ were to start with a minimal number of buckets, and
> track how 'deep' each bucket was, then go to a larger number of bits
> (2/4 at a time?) if the buckets hit a certain depth?  Theoretically,
> this would mean that 'fairness' would be achieved more often in current
> collision situations but that a smaller number of buckets would be
> necessary to achieve fairness in currently low-collision situations.
>
> I haven't looked at the SFQ code in a while, so I don't know how much
> benefit this would be in terms of processing time, or even how expensive
> it would be to change hash sizes on the fly, but at a certain level of
> resolution (+/- 2-4 bits), the changes wouldn't be terribly frequent
> anyway.
>
> I've always been a bit of a fan of self-tuning algorithms anyway :)
> --
> Michael T. Babcock
> CTO, FibreSpeed Ltd.
>
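
For concreteness, the depth-triggered resize described above might look
roughly like this (purely illustrative, nothing like this exists in the
current sch_sfq.c; all names are made up). Growing the table is where
the cost hides: every queued packet has to be rehashed.

#include <stdlib.h>

struct sfq_adaptive {
        unsigned int hash_bits;     /* current hash width, e.g. 6 -> 64 buckets */
        unsigned int max_depth;     /* resize trigger */
        unsigned int *depth;        /* per-bucket queue depth */
};

/* Grow by two bits at a time, as suggested; all queued packets would
 * have to be rehashed into the new table (the expensive part). */
int maybe_grow(struct sfq_adaptive *q, unsigned int bucket)
{
        if (q->depth[bucket] < q->max_depth)
                return 0;

        unsigned int new_bits = q->hash_bits + 2;
        unsigned int *d = calloc(1u << new_bits, sizeof(*d));
        if (!d)
                return -1;          /* keep the old table on allocation failure */

        /* ... rehash every queued packet into the new buckets here ... */

        free(q->depth);
        q->depth = d;
        q->hash_bits = new_bits;
        return 1;
}
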

_______________________________________________
LARTC mailing list / LARTC@mailman.ds9a.nl
http://mailman.ds9a.nl/mailman/listinfo/lartc HOWTO: http://lartc.org/
