Re: [RFC] nftables 0.9.8 -stable backports

On 2024-02-17, at 20:11:42 +0000, Jeremy Sowden wrote:
> On 2023-10-09, at 13:36:23 +0200, Pablo Neira Ayuso wrote:
> > This is a small batch offering fixes for nftables 0.9.8. It only
> > includes the fixes for the implicit chain regression in recent
> > kernels.
> > 
> > A few dependency patches that are missing in 0.9.8 are also
> > required:
> > 
> >         3542e49cf539 ("evaluate: init cmd pointer for new on-stack context")
> >         a3ac2527724d ("src: split chain list in table")
> >         4e718641397c ("cache: rename chain_htable to cache_chain_ht")
> > 
> > a3ac2527724d fixes an issue with the cache and is required by the
> > fixes below. Then come the backported fixes for the implicit chain
> > regression with Linux -stable:
> > 
> >         3975430b12d9 ("src: expand table command before evaluation")
> >         27c753e4a8d4 ("rule: expand standalone chain that contains rules")
> >         784597a4ed63 ("rule: add helper function to expand chain rules into commands")
> > 
> > I tested with tests/shell at the time of the nftables 0.9.8 release
> > (*I did not use git HEAD tests/shell as I did for 1.0.6*).
> > 
> > I have kept back the backport of this patch intentionally:
> > 
> >         56c90a2dd2eb ("evaluate: expand sets and maps before evaluation")
> > 
> > It depends on the new src/interval.c code; in 0.9.8, overlap and
> > automerge happen at a later stage and the cache is not updated
> > incrementally. I tried the tests that come with this patch and they
> > work fine.
> > 
> > I ran a few more tests with rulesets that I have been collecting in
> > my personal ruleset repo from people who occasionally send them to
> > me.
> > 
> > I: results: [OK] 266 [FAILED] 0 [TOTAL] 266
> > 
> > This has been tested with the latest Linux 5.10 -stable kernel.
> > 
> > I can still run a few more tests, I will get back to you if I find any
> > issue.
> > 
> > Let me know, thanks.
> 
> A new version of nftables containing these fixes was released as part of
> the Debian 11.9 point release, which happened a week ago.  Since then,
> we've had a couple of bug-reports:
> 
>   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1063690
>   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1063769
> 
> The gist of them is that if nft processes a file containing multiple
> table-blocks for the same table, and there is a set definition in one of
> the non-initial ones, e.g.:
> 
>   table inet t {
>   }
>   table inet t {
>     set s {
>       type inet_service
>       elements = { 42 }
>     }
>   }
> 
> it crashes with a seg-fault.
> 
> The bison parser creates two `CMD_ADD` commands and allocates two
> `struct table` objects (which I shall refer to as `t0` and `t1`).  When
> it creates the second command, it also allocates a `struct set` object,
> `s`, which it adds to `t1->sets`.  After the `CMD_ADD` commands for `t0`
> and `t1` have been expanded, when the new `CMD_ADD` command for `s` is
> evaluated, `set_evaluate` does this (evaluate.c, ll. 3686ff.):
> 
> 	table = table_lookup_global(ctx);
> 	if (table == NULL)
> 		return table_not_found(ctx);
> 
> and later this (evaluate.c, ll. 3762f.):
> 
> 	if (set_lookup(table, set->handle.set.name) == NULL)
> 		set_add_hash(set_get(set), table);
> 
> The `struct table` object returned by `table_lookup_global` is `t0`,
> since this was evaluated first and cached by `table_evaluate`, not `t1`.
> Therefore, `set_lookup` returns `NULL`, `set_add_hash` is called, `s` is
> added to `t0->sets`, and `t1->sets` is effectively corrupted.  It now
> contains two elements which point to each other, and one of them is not
> a set at all, but `t0->sets`.  This results in a seg-fault when nft
> tries to free `t1`.
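> 
> To make the corruption easier to see, here is a minimal standalone
> model of what happens.  The list primitives below mimic the
> kernel-style list.h that nft uses, and the three list heads stand in
> for `t0->sets`, `t1->sets` and `s->list`; this is a simplified
> illustration, not nft's actual code:
> 
>   #include <stdio.h>
> 
>   struct list_head { struct list_head *next, *prev; };
> 
>   static void init_list_head(struct list_head *h)
>   {
>   	h->next = h->prev = h;
>   }
> 
>   /* insert @n before @head, i.e. at the tail of the list */
>   static void list_add_tail(struct list_head *n, struct list_head *head)
>   {
>   	n->prev = head->prev;
>   	n->next = head;
>   	head->prev->next = n;
>   	head->prev = n;
>   }
> 
>   int main(void)
>   {
>   	struct list_head t0_sets, t1_sets, s_list;
> 
>   	init_list_head(&t0_sets);
>   	init_list_head(&t1_sets);
> 
>   	/* the parser links s into t1->sets ... */
>   	list_add_tail(&s_list, &t1_sets);
> 
>   	/* ... then set_add_hash() links the *same* node into
>   	 * t0->sets without unlinking it from t1->sets first */
>   	list_add_tail(&s_list, &t0_sets);
> 
>   	/* all three print 1: t1->sets still points at s, while s and
>   	 * t0->sets point at each other, so a walk over t1->sets
>   	 * treats t0->sets as an embedded node of a struct set --
>   	 * the bogus pointer nft dereferences when freeing t1 */
>   	printf("%d\n", t1_sets.next == &s_list);
>   	printf("%d\n", s_list.next == &t0_sets);
>   	printf("%d\n", t0_sets.next == &s_list);
>   	return 0;
>   }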
> 
> I _think_ that the following is all that is needed to fix it:
> 
>   @@ -3759,7 +3759,8 @@ static int set_evaluate(struct eval_ctx *ctx, struct set *set)
>           }
>           ctx->set = NULL;
>    
>   -       if (set_lookup(table, set->handle.set.name) == NULL)
>   +       if (set_lookup(table, set->handle.set.name) == NULL &&
>   +           list_empty(&set->list))
>                   set_add_hash(set_get(set), table);
>    
>           return 0;
> 
> Does this look good to you?

Forgot to run the test-suite.  Doing so revealed that this doesn't quite
work because `set_alloc` doesn't initialize `s->list`.  This, however,
does:

  diff --git a/src/evaluate.c b/src/evaluate.c
  index 232ae39020cc..c58e37e14064 100644
  --- a/src/evaluate.c
  +++ b/src/evaluate.c
  @@ -3759,7 +3759,8 @@ static int set_evaluate(struct eval_ctx *ctx, struct set *set)
          }
          ctx->set = NULL;
   
  -       if (set_lookup(table, set->handle.set.name) == NULL)
  +       if (set_lookup(table, set->handle.set.name) == NULL &&
  +           list_empty(&set->list))
                  set_add_hash(set_get(set), table);
   
          return 0;
  diff --git a/src/rule.c b/src/rule.c
  index c23f87f47ae2..365feec08c32 100644
  --- a/src/rule.c
  +++ b/src/rule.c
  @@ -339,6 +339,7 @@ struct set *set_alloc(const struct location *loc)
          if (loc != NULL)
                  set->location = *loc;
   
  +       init_list_head(&set->list);
          init_list_head(&set->stmt_list);
   
          return set;
  @@ -360,6 +361,7 @@ struct set *set_clone(const struct set *set)
          new_set->policy         = set->policy;
          new_set->automerge      = set->automerge;
          new_set->desc           = set->desc;
  +       init_list_head(&new_set->list);
          init_list_head(&new_set->stmt_list);
   
          return new_set;
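
For context on why the `init_list_head` calls are needed: in the usual
kernel-style implementation, which nft's list.h follows, `list_empty`
just compares `head->next` against `head`, so it only gives a
meaningful answer on an initialized head.  A tiny illustration, reusing
the minimal list model from the sketch quoted above:

  static int list_empty(const struct list_head *h)
  {
  	return h->next == h;
  }

  struct list_head l = { 0 };	/* zeroed, as a zeroing allocator
  				 * such as xzalloc() leaves it */

  list_empty(&l);		/* false: l.next is NULL, not &l */
  init_list_head(&l);		/* l.next = l.prev = &l */
  list_empty(&l);		/* true, until the node is linked */

Without the initialization, the new `list_empty` check is always false
for a zero-allocated set (and undefined for uninitialized memory), so
`set_add_hash` would never be called at all.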

Alternatively, we could continue adding the set to the cached table, but
without the seg-fault:

  diff --git a/src/evaluate.c b/src/evaluate.c
  index 232ae39020cc..23ff982b73f0 100644
  --- a/src/evaluate.c
  +++ b/src/evaluate.c
  @@ -3760,7 +3760,7 @@ static int set_evaluate(struct eval_ctx *ctx, struct set *set)
          ctx->set = NULL;
   
          if (set_lookup(table, set->handle.set.name) == NULL)
  -               set_add_hash(set_get(set), table);
  +               set_add_hash(set, table);
   
          return 0;
   }
  diff --git a/src/rule.c b/src/rule.c
  index c23f87f47ae2..0aaefc54c30d 100644
  --- a/src/rule.c
  +++ b/src/rule.c
  @@ -339,6 +339,7 @@ struct set *set_alloc(const struct location *loc)
          if (loc != NULL)
                  set->location = *loc;
   
  +       init_list_head(&set->list);
          init_list_head(&set->stmt_list);
   
          return set;
  @@ -360,6 +361,7 @@ struct set *set_clone(const struct set *set)
          new_set->policy         = set->policy;
          new_set->automerge      = set->automerge;
          new_set->desc           = set->desc;
  +       init_list_head(&new_set->list);
          init_list_head(&new_set->stmt_list);
   
          return new_set;
  @@ -391,7 +393,10 @@ void set_free(struct set *set)
   
   void set_add_hash(struct set *set, struct table *table)
   {
  -       list_add_tail(&set->list, &table->sets);
  +       if (list_empty(&set->list))
  +               list_add_tail(&set_get(set)->list, &table->sets);
  +       else
  +               list_move_tail(&set->list, &table->sets);
   }
   
   struct set *set_lookup(const struct table *table, const char *name)
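
For completeness: `list_move_tail` in the usual kernel-style
implementation first unlinks the node from wherever it currently is and
then appends it to the new head, which is what removes `s` from
`t1->sets` and keeps both lists consistent.  A sketch of the generic
semantics (not a quote of nft's list.h):

  static void list_move_tail(struct list_head *n, struct list_head *head)
  {
  	/* list_del(n): unlink from the current list */
  	n->prev->next = n->next;
  	n->next->prev = n->prev;
  	/* list_add_tail(n, head): append to the new list */
  	n->prev = head->prev;
  	n->next = head;
  	head->prev->next = n;
  	head->prev = n;
  }

The `set_get` in the `list_empty` branch keeps the reference counting
of the original call site; the move branch takes no new reference,
since the set is only changing lists rather than gaining a new owner.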
  
J.
