On Thu, May 20, 2010 at 7:33 PM, Patrick McHardy <kaber@xxxxxxxxx> wrote:
> Eric Dumazet wrote:
>> On Thursday, May 20, 2010 at 18:21 +0530, Anand Raj Manickam wrote:
>>> Hi,
>>> Is there any performance benchmark of conntrack with 1 million
>>> entries in the conntrack table? Since conntrack uses hashing to look
>>> up entries, I have some doubts about its scalability. Can someone
>>> shed some light, please?
>>
>> The question is not about the number of conntrack entries in the hash
>> table, but about the number of inserts and deletes per second.
>>
>> For persistent connections, if you use a hash table of one million
>> slots, performance will be very good, since the chain length is
>> small. It is scalable because each CPU can access the conntrack
>> table in parallel, without locks.

My understanding is that persistent connections are uncommon on
Internet-facing networks. Suppose there are around 50,000 connection
adds and 50,000 connection deletes per second against a table of 1
million concurrent conntrack entries: do we have a scalability problem?
The reason I am posting this question is to understand how hash tables
handle 1 million entries compared to rb-trees handling 1 million
entries.

> Actually the recommended hash table size is twice the number of
> expected connections since each conntrack is hashed twice :)

So, if I am expecting 1 million connections (just connections from
users, NOT helpers/expectations), do I need to set the conntrack table
to 2 million? And how much memory do we need to maintain 1 million
connections? The usual iptables/netfilter guidance suggests about 32k
connections for 512 MB of RAM and 64k connections for more than 1 GB.
As I understand it, each conntrack entry is about 300-odd bytes;
assuming 310 bytes per entry, 310 * 1,000,000 comes to roughly 300 MB.
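For what it is worth, here is a minimal sketch of the runtime knobs
involved, assuming the nf_conntrack module on a recent 2.6 kernel; the
bucket count of 2097152 is only an illustration of the "twice the
expected connections" rule above, not a recommendation for any
particular box:

    # inspect the current hash size (buckets) and entry limit
    cat /sys/module/nf_conntrack/parameters/hashsize
    sysctl net.netfilter.nf_conntrack_max

    # per the rule above: ~2M buckets for ~1M expected connections
    echo 2097152 > /sys/module/nf_conntrack/parameters/hashsize

    # allow up to 1M tracked connections
    sysctl -w net.netfilter.nf_conntrack_max=1048576

Note that the hash array itself also costs memory: each bucket is one
pointer-sized list head, so 2M buckets add roughly 16 MB on a 64-bit
machine on top of the ~300 MB estimated above for the entries.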