On 1/17/24 12:45, Chengming Zhou wrote:
> Since debug slabs are processed by free_to_partial_list(), and only a
> debug slab with the SLAB_STORE_USER flag cares about the full list, we
> can remove these unrelated full list manipulations from __slab_free().

Well spotted.

> Signed-off-by: Chengming Zhou <zhouchengming@xxxxxxxxxxxxx>

Reviewed-by: Vlastimil Babka <vbabka@xxxxxxx>

> ---
>  mm/slub.c | 4 ----
>  1 file changed, 4 deletions(-)
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index 20c03555c97b..f0307e8b4cd2 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -4187,7 +4187,6 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
>  	 * then add it.
>  	 */
>  	if (!kmem_cache_has_cpu_partial(s) && unlikely(!prior)) {
> -		remove_full(s, n, slab);
>  		add_partial(n, slab, DEACTIVATE_TO_TAIL);
>  		stat(s, FREE_ADD_PARTIAL);
>  	}
> @@ -4201,9 +4200,6 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
>  		 */
>  		remove_partial(n, slab);
>  		stat(s, FREE_REMOVE_PARTIAL);
> -	} else {
> -		/* Slab must be on the full list */
> -		remove_full(s, n, slab);
>  	}
> 
>  	spin_unlock_irqrestore(&n->list_lock, flags);
> 
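
For context, the rationale above rests on the full-list helpers being no-ops
unless SLAB_STORE_USER is set. Below is a paraphrased sketch of what
add_full()/remove_full() look like in mm/slub.c under CONFIG_SLUB_DEBUG (from
memory, so the exact bodies may differ; without CONFIG_SLUB_DEBUG they are
empty stubs):

	/* Tracking of fully allocated slabs for debugging purposes */
	static void add_full(struct kmem_cache *s,
			     struct kmem_cache_node *n, struct slab *slab)
	{
		if (!(s->flags & SLAB_STORE_USER))
			return;		/* no-op for non-debug caches */

		lockdep_assert_held(&n->list_lock);
		list_add(&slab->slab_list, &n->full);
	}

	static void remove_full(struct kmem_cache *s,
				struct kmem_cache_node *n, struct slab *slab)
	{
		if (!(s->flags & SLAB_STORE_USER))
			return;		/* no-op for non-debug caches */

		lockdep_assert_held(&n->list_lock);
		list_del(&slab->slab_list);
	}

Since only debug caches set SLAB_STORE_USER, and debug slabs are freed via
free_to_partial_list() rather than __slab_free(), the two remove_full() calls
deleted by this patch could only ever have been no-ops.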