On Wednesday 2015-03-11 20:10, Glen Miner wrote:

> The ugly: as the number of iptables rules increases, the time required to modify the table grows. At first this is linear, but it starts to go super-linear after about 16,000 rules or so; I assume it's blowing out a CPU cache.

Quite possibly. And even if one operating system's implementation can do 16000, and the other 20000, before the turning point is reached, be aware that the limiting factor is generally capacity. The problem is not specific to networking. The same can be observed when there is not enough RAM for an arbitrary task and pages get swapped out to disk. To date, the only solution has been to "add more capacity" or to reduce/split the workload, as there is only so much vertical scaling one can do.

> Will nftables scale any better?

iptc:
- Processes entire rulesets.
- Many non-present options take up space anyway (e.g. as "::/0").
- Sent to the kernel as-is, as one huge chunk.
- "One huge" allocation (per NUMA node) to copy it into the kernel.
- Not a lot of parsing needed.
- Ruleset is linear.

nft:
- Single rules can be updated.
- The \0 space wastage is elided.
- Sent to the kernel in Netlink packets (max. 64K by now, I think), so there is some back-and-forth of syscalls.
- Kernel needs to deserialize.
- Last time I checked, rules are held in a linked list, like in BSD *pf, therefore many small allocations all over the place.

The word on the street is that nft's expressiveness allows you to have fewer rules. Whether it can really be exploited to that level ultimately depends on the ruleset and how many conditions you can aggregate, innit (rough sketches below).

In other words, if you really want to know, measure.
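
To make the "single rules can be updated" point a bit more concrete, a rough sketch (the table/chain names and the rule handle are made up for illustration):

  # iptables/iptc: any change means userspace regenerates the whole table
  # blob and pushes it to the kernel again.
  iptables -A INPUT -s 198.51.100.7 -j DROP

  # nft: a single rule can be added or deleted on its own.
  nft add rule inet filter input ip saddr 198.51.100.7 drop
  nft -a list chain inet filter input           # -a shows rule handles
  nft delete rule inet filter input handle 42   # "42" being whatever handle the listing showed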
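
And for the aggregation point, a sketch of collapsing per-address rules into a set (again, names and addresses are invented; whether your ruleset collapses like this is exactly the part you would have to measure):

  # Thousands of these ...
  #   iptables -A INPUT -s 192.0.2.1 -j DROP
  #   iptables -A INPUT -s 192.0.2.2 -j DROP
  #   ...
  # ... can become one rule plus a set:
  nft add table inet filter
  nft add chain inet filter input '{ type filter hook input priority 0; }'
  nft add set inet filter blocklist '{ type ipv4_addr; flags interval; }'
  nft add element inet filter blocklist '{ 192.0.2.1, 192.0.2.2, 203.0.113.0/24 }'
  nft add rule inet filter input ip saddr @blocklist drop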