Hi,

On Thu, Oct 17, 2019 at 11:03:32AM +0200, Pablo Neira Ayuso wrote:
> On Tue, Oct 15, 2019 at 01:41:44PM +0200, Phil Sutter wrote:
> > Fourth try at caching optimizations implementation.
> >
> > Changes since v3:
> >
> > * Rebase onto current master after pushing the accepted initial three
> >   patches.
> > * Avoid cache inconsistency in __nft_build_cache() if kernel ruleset
> >   changed since last call.
>
> I still hesitate with this cache approach.
>
> Can this deal with this scenario? Say you have a ruleset composed of N
> rules.
>
> * Rules 1..M start using generation X for the evaluation, they pass OK.
>
> * Generation is bumped.
>
> * Rules M..N are evaluated with a different cache.
>
> So the ruleset evaluation is inconsistent in itself, since it is based
> on different caches for each rule in the batch.

Yes, that is possible. In a discussion with Florian back when he fixed
concurrent xtables-restore calls, the consensus was: if you use
--noflush and concurrent ruleset updates happen, you're screwed anyway.
(Meaning, results are not predictable and we can't do anything about
it.)

In comparison with the current code, which just fetches the full cache
upon invocation of 'xtables-restore --noflush', problems might not be
detected during evaluation but only later, when the kernel rejects the
commands. Eventually, commands have to apply to the ruleset as it is
after opening the transaction. If you cache everything first, you don't
detect incompatible ruleset changes at all. If you cache multiple
times, you may detect the incompatible changes while evaluating, but
the result is the same, just with different error messages. :)

Cheers, Phil
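To make the race concrete, here is a minimal, purely illustrative Python sketch (not the actual nft/libnftnl code; class and method names are invented) of the scenario Pablo describes: a cache that, in the spirit of __nft_build_cache(), refetches when the kernel generation ID changes, so two halves of one restore batch can end up evaluated against different views of the ruleset.

```python
# Hypothetical model of the generation-bump race. Names (Kernel, Cache,
# bump, build) are illustrative only, not real iptables/nftables APIs.

class Kernel:
    """Stands in for the kernel ruleset with its generation counter."""
    def __init__(self):
        self.genid = 0
        self.chains = {"INPUT", "FORWARD", "OUTPUT"}

    def bump(self, new_chain):
        # A concurrent ruleset update adds a chain and bumps the generation.
        self.chains.add(new_chain)
        self.genid += 1

class Cache:
    """Fetches the chain list and remembers the generation it saw."""
    def __init__(self, kernel):
        self.kernel = kernel
        self.genid = None
        self.chains = None

    def build(self):
        # Mirrors the idea behind the __nft_build_cache() fix: refetch
        # if the kernel generation changed since the last fetch, so the
        # cache itself stays internally consistent.
        if self.genid != self.kernel.genid:
            self.genid = self.kernel.genid
            self.chains = set(self.kernel.chains)
        return self.chains

kernel = Kernel()
cache = Cache(kernel)

first = cache.build()      # rules 1..M evaluated against generation 0
kernel.bump("NEWCHAIN")    # concurrent update bumps the generation
second = cache.build()     # rules M..N see a different (newer) cache

# The batch as a whole was evaluated against two inconsistent views.
assert first != second
```

Each cache view is consistent with *some* kernel state, but the batch spans two of them, which is exactly why --noflush with concurrent updates gives unpredictable results.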