Re: Massive filtering

Linux Advanced Routing and Traffic Control


 



ericr wrote:

My first thought for scaling up was to use the hash tables, and I feel
that the last line in the LARTC document's page "12.4. Hashing filters
for very fast massive filtering", which says "Note that this example could
be improved to the ideal case where each chain contains 1 filter!", is a
little misleading, since no divisor above 256 works.  On first reading, I'm
thinking, yeah, I'll just use a divisor of 16777216 and my problems are
solved... nope, wrong answer.  I haven't even gotten to the point where I
issue 32 million filter rules to tc and see if it chokes.

The only solution in the case of thousands of rules is the u32 classifier with hashing filters. Unfortunately, the divisor's upper limit is 256, which is not enough for practical tasks. On the other hand, hashes with a very large number of buckets (like the 16777216 you mentioned) can't be implemented, because they would require far more RAM than you can address.
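For reference, here is a sketch of the hashing-filter setup being discussed, along the lines of section 12.4 of the LARTC HOWTO. The device name (eth0), qdisc handle (1:), hash table handle (2:), and the 10.0.0.0/8 destination range are assumptions for illustration; the divisor of 256 is the maximum the u32 classifier accepts.

```shell
# Create a u32 hash table with the maximum divisor of 256 buckets
# (handle 2: is an arbitrary choice for this example).
tc filter add dev eth0 parent 1:0 prio 5 handle 2: protocol ip u32 divisor 256

# From the root table (800:), hash on the last byte of the IP
# destination address (offset 16 in the IP header) and jump into
# table 2:.
tc filter add dev eth0 parent 1:0 prio 5 protocol ip u32 ht 800:: \
    match ip dst 10.0.0.0/8 \
    hashkey mask 0x000000ff at 16 link 2:

# Per-bucket rule: traffic to 10.0.0.5 hashes into bucket 5 of
# table 2:, so only the filters in that bucket are evaluated.
tc filter add dev eth0 parent 1:0 prio 5 protocol ip u32 ht 2:5: \
    match ip dst 10.0.0.5/32 flowid 1:5
```

With a /8 of destinations and a divisor of 256, each bucket still has to hold on the order of 65536 filters, which is exactly the scaling wall described above.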

I have a similar task, and some days ago I started working on patches for tc and the u32 classifier that will allow large hashes (see my recent messages in the linux-net@ mailing list archive). I'm a newbie in the Linux kernel and can't complete this task quickly. I think we should ask experienced developers for help.
_______________________________________________
LARTC mailing list
LARTC@xxxxxxxxxxxxxxx
http://mailman.ds9a.nl/cgi-bin/mailman/listinfo/lartc
