Hi Taehee,

On Tue, Jul 10, 2018 at 11:22:01PM +0900, Taehee Yoo wrote:
> This patch fixes below.
> 1. check null pointer of rb_next.
> rb_next can return null. so null check routine should be added.
> 2. add rcu_barrier in destroy routine.
> GC uses call_rcu to remove elements. but all elements should be
> removed before destroying set and chains. so that rcu_barrier is added.
[...]
> diff --git a/net/netfilter/nft_set_rbtree.c b/net/netfilter/nft_set_rbtree.c
> index 1f8f257..09e3a15 100644
> --- a/net/netfilter/nft_set_rbtree.c
> +++ b/net/netfilter/nft_set_rbtree.c
> @@ -381,7 +381,7 @@ static void nft_rbtree_gc(struct work_struct *work)
>
> 		gcb = nft_set_gc_batch_check(set, gcb, GFP_ATOMIC);
> 		if (!gcb)
> -			goto out;
> +			break;
>
> 		atomic_dec(&set->nelems);
> 		nft_set_gc_batch_add(gcb, rbe);
> @@ -390,10 +390,12 @@ static void nft_rbtree_gc(struct work_struct *work)
> 			rbe = rb_entry(prev, struct nft_rbtree_elem, node);
> 			atomic_dec(&set->nelems);
> 			nft_set_gc_batch_add(gcb, rbe);
> +			prev = NULL;
> 		}
> 		node = rb_next(node);
> +		if (!node)
> +			break;
> 	}
> -out:
> 	if (gcb) {
> 		for (i = 0; i < gcb->head.cnt; i++) {
> 			rbe = gcb->elems[i];
> @@ -440,9 +442,10 @@ static void nft_rbtree_destroy(const struct nft_set *set)
> 	struct rb_node *node;
>
> 	cancel_delayed_work_sync(&priv->gc_work);
> +	rcu_barrier();
> 	while ((node = priv->root.rb_node) != NULL) {
> -		rb_erase(node, &priv->root);
> 		rbe = rb_entry(node, struct nft_rbtree_elem, node);
> +		rb_erase(node, &priv->root);

Just to clarify: do we have to reorder these lines? I mean, place
rb_erase() after rb_entry()? Just asking because this is not described
in the patch description. I just came back from a long trip and I'm
tired, so I may be overlooking something.

No need to resend in any case. Let me know, thanks!

--
To unsubscribe from this list: send the line "unsubscribe netfilter-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html