On 24.11, Pablo Neira Ayuso wrote:
> On Tue, Nov 24, 2015 at 10:25:54AM +0000, Patrick McHardy wrote:
> > Tracing might be a long running operation. The cache can go out of
> > sync, so it might be better to do a lookup on demand.
>
> We'll need to handle generations in that approach. The kernel lookup
> per trace will be expensive.
>
> Why not just keep the cache in userspace and update it only when
> needed? We can easily detect when we get out of sync via ENOBUFS.
>
> > Right now the caching infrastructure has quite a lot of problems and
> > I'd prefer to get them fixed before we base new things on it.
>
> The caching infrastructure only needs to have a mode to be populated
> via set information, then infer existing tables from handles as you
> indicated.
>
> What other problems do you see with it?

Well, I keep running into problems with it. We already discussed a few:
we're dumping way too much information that we don't need, and we're
making nft require root even for unprivileged operations like just
testing ruleset syntax.

We're basing errors on a cache that might not be up to date.

When I list the bridge table, for some reason it lists *all* tables of
all families. We're basically doing full dumps of everything in many
cases. This will absolutely kill performance with a big ruleset.

AFAIK (I did not test) we're only listing sets for the family of the
first command, then we set cache_initialized to true and skip further
updates. When the ruleset refers to multiple families, their contents
will not be present even though they are expected.

It basically seems like the big hammer approach plus some bugs, instead
of selectively getting what we need when we need it and making sure
it's up to date, at least before generating errors based on it.
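
For the ENOBUFS detection Pablo mentions above, roughly something like
this (a sketch only; the socket is assumed already bound to
NFNLGRP_NFTABLES events, and cache_dirty is a made-up flag, not the
current nft code):

/* Mark the userspace cache stale when the kernel drops events.
 * Sketch with hypothetical names, assuming libmnl. */
#include <errno.h>
#include <stdbool.h>
#include <libmnl/libmnl.h>
#include <linux/netfilter/nfnetlink.h>

static bool cache_dirty;

static void cache_monitor(struct mnl_socket *nl)
{
	char buf[MNL_SOCKET_BUFFER_SIZE];

	for (;;) {
		ssize_t ret = mnl_socket_recvfrom(nl, buf, sizeof(buf));

		if (ret < 0 && errno == ENOBUFS) {
			/* The kernel dropped notifications: the cache
			 * may be out of sync, so force a re-dump before
			 * the next lookup instead of trusting it. */
			cache_dirty = true;
			continue;
		}
		if (ret <= 0)
			break;
		/* otherwise update the cache incrementally from the
		 * event message */
	}
}

Lookups would then check cache_dirty first and re-dump only when it is
set, which keeps the common case cheap without a kernel round trip per
trace.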
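And for the set listing issue, what I mean by tracking initialization
per family rather than with a single flag, roughly (again a sketch;
netlink_list_sets_family() is an illustrative stand-in for the real
dump helper, not an existing function):

/* Per-family cache completeness, so the second family referenced by a
 * ruleset still gets its sets listed. */
#include <stdbool.h>
#include <stdint.h>
#include <linux/netfilter.h>	/* NFPROTO_NUMPROTO */

static bool cache_initialized[NFPROTO_NUMPROTO];

static int cache_update_sets(struct netlink_ctx *ctx, uint32_t family)
{
	if (family >= NFPROTO_NUMPROTO || cache_initialized[family])
		return 0;
	/* dump only this family's sets instead of everything */
	if (netlink_list_sets_family(ctx, family) < 0)
		return -1;
	cache_initialized[family] = true;
	return 0;
}

That would also move us towards selectively fetching what we need per
family instead of the full dumps we do today.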