Eric Dumazet <eric.dumazet@xxxxxxxxx> wrote:
> > +/* caller must hold rcu readlock and none of the nf_conntrack_locks */
> > +static void nf_ct_gc_expired(struct nf_conn *ct)
> > +{
> > +	if (!atomic_inc_not_zero(&ct->ct_general.use))
> > +		return;
> > +
> > +	if (nf_ct_should_gc(ct))
> > +		nf_ct_kill(ct);
> > +
> > +	nf_ct_put(ct);
> > +}
> > +
> >  /*
> >   * Warning :
> >   * - Caller must take a reference on returned object
> > @@ -499,6 +505,17 @@ begin:
> >  	bucket = reciprocal_scale(hash, hsize);
> >
> >  	hlist_nulls_for_each_entry_rcu(h, n, &ct_hash[bucket], hnnode) {
> > +		struct nf_conn *ct;
> > +
> > +		ct = nf_ct_tuplehash_to_ctrack(h);
> > +		if (nf_ct_is_expired(ct)) {
> > +			nf_ct_gc_expired(ct);
> > +			continue;
> > +		}
> > +
> > +		if (nf_ct_is_dying(ct))
> > +			continue;
> > +
> >  		if (nf_ct_key_equal(h, tuple, zone, net)) {
> >  			NF_CT_STAT_INC_ATOMIC(net, found);
> >  			return h;
>
> Florian, I do not see how this part is safe against concurrent lookups
> and deletes ?
>
> At least the hlist_nulls_for_each_entry_rcu() looks buggy, since
> fetching the next pointer would trigger a use after free ?

Hmm, the ____nf_conntrack_find caller needs to hold rcu_read_lock; in case
the object is freed, SLAB_DESTROY_BY_RCU should delay the actual release of
the page, so fetching the next pointer cannot hit unmapped memory.

This should be the same situation as the (old) 'death_by_timeout' timer
firing during an hlist_nulls_for_each_entry_rcu walk.  A sketch of the
lookup pattern this relies on is below.

Thanks for looking at this, Eric!
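For reference, the lookup pattern this relies on looks roughly like the
sketch below; it is simplified from what __nf_conntrack_find_get() in
net/netfilter/nf_conntrack_core.c already does.  The wrapper name is
invented for illustration, the helpers are the existing ones, and stats
updates are omitted:

/* Minimal sketch of the SLAB_DESTROY_BY_RCU lookup pattern, modelled on
 * __nf_conntrack_find_get().  Wrapper name is illustrative only.
 */
static struct nf_conntrack_tuple_hash *
find_get_sketch(struct net *net, const struct nf_conntrack_zone *zone,
		const struct nf_conntrack_tuple *tuple, u32 hash)
{
	struct nf_conntrack_tuple_hash *h;
	struct nf_conn *ct;

	rcu_read_lock();
begin:
	h = ____nf_conntrack_find(net, zone, tuple, hash);
	if (h) {
		ct = nf_ct_tuplehash_to_ctrack(h);
		if (unlikely(nf_ct_is_dying(ct) ||
			     !atomic_inc_not_zero(&ct->ct_general.use))) {
			/* Entry is being torn down; pretend we did not see it. */
			h = NULL;
		} else if (unlikely(!nf_ct_key_equal(h, tuple, zone, net))) {
			/* SLAB_DESTROY_BY_RCU: the object may have been freed
			 * and recycled for a different tuple while we walked
			 * the nulls list, so re-check the key after taking the
			 * reference and restart the lookup on a mismatch.
			 */
			nf_ct_put(ct);
			goto begin;
		}
	}
	rcu_read_unlock();

	return h;
}

The page backing the object is never handed back to the page allocator
while we are inside rcu_read_lock(), only recycled within the same slab
cache, which is why re-checking the tuple after the refcount grab is
enough.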