Re: [nft PATCH] make cache persistent if local entries were added

On Sat, Oct 20, 2018 at 12:35:11PM +0200, Pablo Neira Ayuso wrote:
> On Sat, Oct 20, 2018 at 12:24:06PM +0200, Phil Sutter wrote:
> > The JSON API as well as the nft CLI allow running multiple commands within the
> > same batch. Depending on the local cache state, a later command may
> > trigger a cache update which removes the local entry added by an earlier
> > command.
> > 
> > To overcome this, introduce a special genid value to set when local
> > entries are added to the cache which blocks all cache updates until the
> > batch has been committed to the kernel.
> 
> Probably we can make sure we issue a cache_update() by the time we call
> chain_add_hash(), before adding the local object to the cache, then
> lock it? Or add assert() to the _add_hash() functions to make sure the
> cache is up to date? We need a valid cache before we can lock it, right?

The problem is that a batch commit outdates the local cache. An example
showing the problem is:

| % sudo nft -i
| nft> list ruleset
| nft> add table ip t
| nft> add table ip t2; add chain ip t2 c
| Error: Could not process rule: No such file or directory
| add table ip t2; add chain ip t2 c
|                               ^^

With 'list ruleset', I just ensure cache->genid is not zero. The first
'add table' command increments the kernel's genid. The 'add chain' command
then triggers a cache update which removes table t2 from the cache again.

> Do you have several examples that are triggering the problem that we
> can place in the test regression infrastructure?

I'll try to collect a few and will send a test case so we have something
to validate against.

Thanks, Phil
