On Thu, May 18, 2023 at 6:40 PM SeongJae Park <sj@xxxxxxxxxx> wrote:
>
> The document says we can avoid extra smp_rmb() in lockless_lookup() and
> extra _release() in insert function when hlist_nulls is used. However,
> the example code snippet for the insert function is still using the
> extra _release(). Drop it.
>
> Signed-off-by: SeongJae Park <sj@xxxxxxxxxx>
> ---
>  Documentation/RCU/rculist_nulls.rst | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/Documentation/RCU/rculist_nulls.rst b/Documentation/RCU/rculist_nulls.rst
> index 5cd6f3f8810f..463270273d89 100644
> --- a/Documentation/RCU/rculist_nulls.rst
> +++ b/Documentation/RCU/rculist_nulls.rst
> @@ -191,7 +191,7 @@ scan the list again without harm.
>     obj = kmem_cache_alloc(cachep);
>     lock_chain(); // typically a spin_lock()
>     obj->key = key;
> -   atomic_set_release(&obj->refcnt, 1); // key before refcnt
> +   atomic_set(&obj->refcnt, 1);
>     /*
>      * insert obj in RCU way (readers might be traversing chain)
>      */

If the write of 1 to ->refcnt is reordered with the store to ->key, what
prevents the 'lookup algorithm' from doing a key match (obj->key == key)
before the refcount has been initialized?

Are we sure the reordering mentioned in the document is the same as the
reordering prevented by the atomic_set_release()?

For the other 3 patches, feel free to add:

Reviewed-by: Joel Fernandes (Google) <joel@xxxxxxxxxxxxxxxxx>

thanks,

- Joel
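
To make the reordering in question concrete, below is a minimal sketch of
the two sides of the race being asked about, assuming the patched (plain
atomic_set()) insert path and a SLAB_TYPESAFE_BY_RCU cache, where a reader
may still be referencing an object while it is reinitialized for a new
key. The struct and function names here are illustrative only, not the
actual code from rculist_nulls.rst:

  #include <linux/atomic.h>
  #include <linux/rculist_nulls.h>
  #include <linux/types.h>

  /* Illustrative type; field names mirror the document's example. */
  struct obj {
          struct hlist_nulls_node obj_node;
          atomic_t                refcnt;
          int                     key;
  };

  /* Writer side, with this patch applied: */
  static void insert_obj(struct obj *obj, int key,
                         struct hlist_nulls_head *list)
  {
          obj->key = key;              /* plain store */
          atomic_set(&obj->refcnt, 1); /* plain store; nothing in this
                                        * function orders it against the
                                        * ->key store above */
          hlist_nulls_add_head_rcu(&obj->obj_node, list);
  }

  /* Reader side, racing with a reinitialization of @obj: */
  static bool try_take_ref(struct obj *obj, int key)
  {
          if (obj->key != key)         /* may observe a stale ->key */
                  return false;
          if (!atomic_inc_not_zero(&obj->refcnt))
                  return false;
          /*
           * The reader now holds a reference, but if the two plain
           * stores in insert_obj() were reordered, what guarantees
           * that this re-check of ->key sees the key that goes with
           * the refcount it just incremented?
           */
          return obj->key == key;
  }

The sketch only locates where the "key before refcnt" ordering would
matter; it is not a claim about what the correct resolution is.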