On 2023/10/27 23:18, Vlastimil Babka wrote:
> On 10/24/23 11:33, chengming.zhou@xxxxxxxxx wrote:
>> From: Chengming Zhou <zhouchengming@xxxxxxxxxxxxx>
>>
>> Now the partial slub will be frozen when taken out of node partial list,
> 
> partially empty slab
> 
>> so the __slab_free() will know from "was_frozen" that the partial slab
>> is not on node partial list and is used by one kmem_cache_cpu.
> 
> ... is a cpu or cpu partial slab of some cpu.
> 
>> But we will change this, make partial slabs leave the node partial list
>> with unfrozen state, so we need to change __slab_free() to use the new
>> slab_test_node_partial() we just introduced.
>>
>> Signed-off-by: Chengming Zhou <zhouchengming@xxxxxxxxxxxxx>
>> ---
>>  mm/slub.c | 11 +++++++++++
>>  1 file changed, 11 insertions(+)
>>
>> diff --git a/mm/slub.c b/mm/slub.c
>> index 3fad4edca34b..f568a32d7332 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -3610,6 +3610,7 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
>>  	unsigned long counters;
>>  	struct kmem_cache_node *n = NULL;
>>  	unsigned long flags;
>> +	bool on_node_partial;
>>  
>>  	stat(s, FREE_SLOWPATH);
>>  
>> @@ -3657,6 +3658,7 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
>>  				 */
>>  				spin_lock_irqsave(&n->list_lock, flags);
>>  
>> +				on_node_partial = slab_test_node_partial(slab);
>>  			}
>>  		}
>>  
>> @@ -3685,6 +3687,15 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
>>  		return;
>>  	}
>>  
>> +	/*
>> +	 * This slab was partial but not on the per-node partial list,
> 
> This slab was partially empty ...
> 
> Otherwise LGTM!

Ok, will fix. Thanks!

> 
>> +	 * in which case we shouldn't manipulate its list, just return.
>> +	 */
>> +	if (prior && !on_node_partial) {
>> +		spin_unlock_irqrestore(&n->list_lock, flags);
>> +		return;
>> +	}
>> +
>>  	if (unlikely(!new.inuse && n->nr_partial >= s->min_partial))
>>  		goto slab_empty;
>> 
> 
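
For anyone reading this patch without the earlier ones in the series: the
helper used above tracks whether a slab currently sits on the per-node
partial list, so __slab_free() no longer has to infer that from "was_frozen".
Below is a minimal sketch of what such flag-based helpers could look like;
borrowing the workingset page flag for this purpose is an assumption made
here for illustration, and the exact bit used by the series may differ.
The set/clear sides would be called with n->list_lock held, matching the
locking that protects the list itself.

/*
 * Sketch only, not necessarily the exact helpers from the series:
 * record "is on the per-node partial list" in an otherwise-unused
 * page flag. PG_workingset is borrowed here purely for illustration.
 */
static inline bool slab_test_node_partial(const struct slab *slab)
{
	return folio_test_workingset((struct folio *)slab_folio(slab));
}

/* Callers must hold the node's list_lock. */
static inline void slab_set_node_partial(struct slab *slab)
{
	set_bit(PG_workingset, folio_flags(slab_folio(slab), 0));
}

static inline void slab_clear_node_partial(struct slab *slab)
{
	clear_bit(PG_workingset, folio_flags(slab_folio(slab), 0));
}

With something like this, the add-to-list and remove-from-list paths set
and clear the flag under list_lock, and the check in __slab_free() above
can safely tell a slab that left the partial list unfrozen apart from one
that was never on it.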