RE: iptables at scale


 



>>The ugly: as the number of iptables rules increases, the time required to
>>modify the table grows. At first this is linear but it starts to go
>>super-linear after about 16,000 rules or so; I assume it's blowing out a CPU
>>cache.
>
> Quite possibly. And even if one operating system's implementation can
> do 16000, and the other 20000 before the turning point is reached, be
> aware that the limiting factor is generally capacity. The problem is
> not specific to networking. The same can be observed when there is
> not enough RAM for an arbitrary task and pages get swapped out to
> disk. To date, the only solution was to "add more capacity" or to
> reduce/split the workload, as there is only so much vertical scaling
> one can do.

To be clear: judging by the scaling of kernel performance so far, the only bottleneck at this point is table modification. I easily have 20x headroom on the server as it is, but I'm choked because I can't create rules fast enough.
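One mitigation worth noting for the rule-creation bottleneck: classic iptables rewrites the whole table on every invocation, so batching everything into a single iptables-restore run pays that cost once instead of once per rule. A minimal sketch, with the chain, addresses, and rule count as illustrative placeholders:

```shell
#!/bin/sh
# Sketch: build one iptables-restore payload instead of N iptables calls.
# Chain and addresses below are placeholders; substitute real criteria.
RULES=rules.txt
{
  echo '*filter'
  i=1
  while [ "$i" -le 1000 ]; do
    # Hypothetical per-host DROP rules spread over 10.0.0.0/16.
    echo "-A INPUT -s 10.0.$((i / 256)).$((i % 256)) -j DROP"
    i=$((i + 1))
  done
  echo 'COMMIT'
} > "$RULES"
echo "generated $(grep -c '^-A' "$RULES") rules"
# Apply in one shot (requires root); -n avoids flushing other tables:
# iptables-restore -n < "$RULES"
```

The restore still swaps in a full table, but you do one table write for the whole batch rather than one per rule.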

And yes -- we're ready to go wide -- but that has its own problems. I'd really like to find a way to get much better utilization per node first.

> nft: Single rules can be updated. The \0 space wastage is elided.
...
> In other words, if you really want to know, measure.

Ok, thanks for the info -- I don't know enough about the state this package is in, but I'll take a look. I'm not sure I can pull iptables out of my current system and drop in nftables instead, but maybe I'll try tomorrow. If I get nftables working, I'll definitely post scaling numbers.
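In case it helps anyone attempting the same switch: nft can apply a whole ruleset file atomically via `nft -f`, and afterwards single rules can be added or deleted incrementally without rewriting the table, which is exactly the property being discussed above. A minimal sketch, with the table/chain names and addresses as placeholders:

```shell
#!/bin/sh
# Sketch: generate an nftables batch file; nft -f applies it atomically.
# Table/chain names and addresses are placeholders.
BATCH=batch.nft
{
  echo 'add table inet filter'
  echo 'add chain inet filter input { type filter hook input priority 0; }'
  i=1
  while [ "$i" -le 1000 ]; do
    echo "add rule inet filter input ip saddr 10.0.$((i / 256)).$((i % 256)) drop"
    i=$((i + 1))
  done
} > "$BATCH"
echo "generated $(grep -c '^add rule' "$BATCH") rules"
# Apply atomically (requires root):
# nft -f "$BATCH"
# Later, a single incremental update -- no full-table rewrite:
# nft add rule inet filter input ip saddr 192.0.2.1 drop
```

Timing `nft -f` at various rule counts against the equivalent iptables runs would give the scaling comparison directly.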

-g

--
To unsubscribe from this list: send the line "unsubscribe netfilter-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html