On 2024/2/27 01:38, Christoph Lameter (Ampere) wrote:
> On Fri, 23 Feb 2024, Vlastimil Babka wrote:
>
>> On 2/23/24 10:37, Chengming Zhou wrote:
>>> On 2024/2/23 17:24, Vlastimil Babka wrote:
>>>>
>>>>> I think this is a better direction! We can use RCU list if slab can
>>>>> be freed by RCU.
>>>>
>>>> Often we remove slab from the partial list for other purposes than
>>>> freeing - i.e. to become a cpu (partial) slab, and that can't be
>>>> handled by a rcu callback nor can we wait a grace period in such
>>>> situations.
>>>
>>> IMHO, only free_slab() needs to use call_rcu() to delay freeing the
>>> slab; other paths, like taking partial slabs from the node partial
>>> list, don't need to wait for an RCU grace period.
>>>
>>> All we want is to safely iterate over the node partial list locklessly,
>>> right?
>>
>> Yes, and for that there's the "list_head slab_list", which is in union
>> with "struct slab *next" and "int slabs" for the cpu partial list. So if
>> we remove a slab from the partial list and rewrite the list_head for the
>> partial list purposes, it will break the lockless iterators, right? We
>> would have to wait a grace period between unlinking the slab from the
>> partial list (so no new iterators can reach it) and reusing the
>> list_head (so we are sure the existing iterators stopped looking at our
>> slab).
>
> We could mark the state change (list ownership) in the slab metadata and
> then abort the scan if the state mismatches the list.

That seems feasible; maybe something like below? But this way requires all
kmem_caches to have SLAB_TYPESAFE_BY_RCU, right? I'm not sure that is
acceptable, since it may cause random delays in freeing memory.

```
retry:
	rcu_read_lock();
	h = rcu_dereference(list_next_rcu(&n->partial));

	while (h != &n->partial) {
		slab = list_entry(h, struct slab, slab_list);

		/* Recheck the slab with the node list lock held. */
		spin_lock_irqsave(&n->list_lock, flags);
		if (!slab_test_node_partial(slab)) {
			spin_unlock_irqrestore(&n->list_lock, flags);
			rcu_read_unlock();
			goto retry;
		}

		/* Count this slab's inuse objects here. */

		/* Get the next pointer with the node list lock held. */
		h = rcu_dereference(list_next_rcu(h));
		spin_unlock_irqrestore(&n->list_lock, flags);
	}
	rcu_read_unlock();
```

>
>> Maybe there are more advanced RCU tricks, but this is my basic
>> understanding of how this works.
>
> This could get tricky, but we already do similar things with RCU slab
> objects/metadata, where we allow the reuse of the object before the RCU
> period expires and there is an understanding that the user of those
> objects needs to verify that the type of the object matches expectations
> when looking for objects.