This is a note to let you know that I've just added the patch titled

    netfilter: nft_set_rbtree: fix null deref on element insertion

to the 5.4-stable tree which can be found at:

    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     netfilter-nft_set_rbtree-fix-null-deref-on-element-insertion.patch
and it can be found in the queue-5.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable
tree, please let <stable@xxxxxxxxxxxxxxx> know about it.


>From stable-owner@xxxxxxxxxxxxxxx Tue Nov 21 12:13:55 2023
From: Pablo Neira Ayuso <pablo@xxxxxxxxxxxxx>
Date: Tue, 21 Nov 2023 13:13:12 +0100
Subject: netfilter: nft_set_rbtree: fix null deref on element insertion
To: netfilter-devel@xxxxxxxxxxxxxxx
Cc: gregkh@xxxxxxxxxxxxxxxxxxx, sashal@xxxxxxxxxx, stable@xxxxxxxxxxxxxxx
Message-ID: <20231121121333.294238-6-pablo@xxxxxxxxxxxxx>

From: Florian Westphal <fw@xxxxxxxxx>

commit 61ae320a29b0540c16931816299eb86bf2b66c08 upstream.

There is no guarantee that rb_prev() will not return NULL in
nft_rbtree_gc_elem():

 general protection fault, probably for non-canonical address 0xdffffc0000000003: 0000 [#1] PREEMPT SMP KASAN
 KASAN: null-ptr-deref in range [0x0000000000000018-0x000000000000001f]
  nft_add_set_elem+0x14b0/0x2990
  nf_tables_newsetelem+0x528/0xb30

Furthermore, there is a possible use-after-free while iterating:
'node' can be free'd, so we need to cache the next value to use.
Fixes: c9e6978e2725 ("netfilter: nft_set_rbtree: Switch to node list walk for overlap detection")
Signed-off-by: Florian Westphal <fw@xxxxxxxxx>
Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 net/netfilter/nft_set_rbtree.c |   20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

--- a/net/netfilter/nft_set_rbtree.c
+++ b/net/netfilter/nft_set_rbtree.c
@@ -220,7 +220,7 @@ static int nft_rbtree_gc_elem(const stru
 {
 	struct nft_set *set = (struct nft_set *)__set;
 	struct rb_node *prev = rb_prev(&rbe->node);
-	struct nft_rbtree_elem *rbe_prev;
+	struct nft_rbtree_elem *rbe_prev = NULL;
 	struct nft_set_gc_batch *gcb;
 
 	gcb = nft_set_gc_batch_check(set, NULL, GFP_ATOMIC);
@@ -228,17 +228,21 @@ static int nft_rbtree_gc_elem(const stru
 		return -ENOMEM;
 
 	/* search for expired end interval coming before this element. */
-	do {
+	while (prev) {
 		rbe_prev = rb_entry(prev, struct nft_rbtree_elem, node);
 		if (nft_rbtree_interval_end(rbe_prev))
 			break;
 
 		prev = rb_prev(prev);
-	} while (prev != NULL);
+	}
+
+	if (rbe_prev) {
+		rb_erase(&rbe_prev->node, &priv->root);
+		atomic_dec(&set->nelems);
+	}
 
-	rb_erase(&rbe_prev->node, &priv->root);
 	rb_erase(&rbe->node, &priv->root);
-	atomic_sub(2, &set->nelems);
+	atomic_dec(&set->nelems);
 
 	nft_set_gc_batch_add(gcb, rbe);
 	nft_set_gc_batch_complete(gcb);
@@ -267,7 +271,7 @@ static int __nft_rbtree_insert(const str
 			       struct nft_set_ext **ext)
 {
 	struct nft_rbtree_elem *rbe, *rbe_le = NULL, *rbe_ge = NULL;
-	struct rb_node *node, *parent, **p, *first = NULL;
+	struct rb_node *node, *next, *parent, **p, *first = NULL;
 	struct nft_rbtree *priv = nft_set_priv(set);
 	u8 genmask = nft_genmask_next(net);
 	int d, err;
@@ -306,7 +310,9 @@ static int __nft_rbtree_insert(const str
 	 * Values stored in the tree are in reversed order, starting from
 	 * highest to lowest value.
 	 */
-	for (node = first; node != NULL; node = rb_next(node)) {
+	for (node = first; node != NULL; node = next) {
+		next = rb_next(node);
+
 		rbe = rb_entry(node, struct nft_rbtree_elem, node);
 
 		if (!nft_set_elem_active(&rbe->ext, genmask))


Patches currently in stable-queue which might be from stable-owner@xxxxxxxxxxxxxxx are

queue-5.4/netfilter-nf_tables-fix-memleak-when-more-than-255-elements-expired.patch
queue-5.4/netfilter-nft_set_rbtree-fix-overlap-expiration-walk.patch
queue-5.4/netfilter-nf_tables-use-correct-lock-to-protect-gc_list.patch
queue-5.4/netfilter-nf_tables-disable-toggling-dormant-table-state-more-than-once.patch
queue-5.4/netfilter-nf_tables-gc-transaction-race-with-netns-dismantle.patch
queue-5.4/netfilter-nf_tables-drop-map-element-references-from-preparation-phase.patch
queue-5.4/netfilter-nf_tables-fix-gc-transaction-races-with-netns-and-netlink-event-exit-path.patch
queue-5.4/netfilter-nf_tables-don-t-skip-expired-elements-during-walk.patch
queue-5.4/netfilter-nf_tables-remove-busy-mark-and-gc-batch-api.patch
queue-5.4/netfilter-nf_tables-gc-transaction-race-with-abort-path.patch
queue-5.4/netfilter-nf_tables-unregister-flowtable-hooks-on-netns-exit.patch
queue-5.4/netfilter-nft_set_rbtree-switch-to-node-list-walk-for-overlap-detection.patch
queue-5.4/netfilter-nf_tables-adapt-set-backend-to-use-gc-transaction-api.patch
queue-5.4/netfilter-nftables-rename-set-element-data-activation-deactivation-functions.patch
queue-5.4/netfilter-nft_set_rbtree-skip-sync-gc-for-new-elements-in-this-transaction.patch
queue-5.4/netfilter-nf_tables-pass-context-to-nft_set_destroy.patch
queue-5.4/netfilter-nf_tables-bogus-ebusy-when-deleting-flowtable-after-flush-for-5.4.patch
queue-5.4/netfilter-nft_set_hash-try-later-when-gc-hits-eagain-on-iteration.patch
queue-5.4/netfilter-nf_tables-defer-gc-run-if-previous-batch-is-still-pending.patch
queue-5.4/netfilter-nft_set_rbtree-use-read-spinlock-to-avoid-datapath-contention.patch
queue-5.4/netfilter-nf_tables-double-hook-unregistration-in-netns-path.patch
queue-5.4/netfilter-nftables-update-table-flags-from-the-commit-phase.patch
queue-5.4/netfilter-nft_set_hash-mark-set-element-as-dead-when-deleting-from-packet-path.patch
queue-5.4/netfilter-nf_tables-gc-transaction-api-to-avoid-race-with-control-plane.patch
queue-5.4/netfilter-nf_tables-fix-table-flag-updates.patch
queue-5.4/netfilter-nft_set_rbtree-fix-null-deref-on-element-insertion.patch