On 9/16/21 14:39, Miaohe Lin wrote:
> kmem_cache_free_bulk() will call memcg_slab_free_hook() for all objects
> when doing bulk free. So we shouldn't call memcg_slab_free_hook() again
> for bulk free to avoid incorrect memcg slab count.
>
> Fixes: d1b2cf6cb84a ("mm: memcg/slab: uncharge during kmem_cache_free_bulk()")
> Signed-off-by: Miaohe Lin <linmiaohe@xxxxxxxxxx>

Reviewed-by: Vlastimil Babka <vbabka@xxxxxxx>

I just noticed the series doesn't Cc: stable and it should, so I hope
Andrew can add those together with the review tags. Thanks.

> ---
>  mm/slub.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index f3df0f04a472..d8f77346376d 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3420,7 +3420,9 @@ static __always_inline void do_slab_free(struct kmem_cache *s,
>  	struct kmem_cache_cpu *c;
>  	unsigned long tid;
>
> -	memcg_slab_free_hook(s, &head, 1);
> +	/* memcg_slab_free_hook() is already called for bulk free. */
> +	if (!tail)
> +		memcg_slab_free_hook(s, &head, 1);
>  redo:
>  	/*
>  	 * Determine the currently cpus per cpu slab.
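
For anyone wondering why !tail is the right test here: a rough sketch of
the two free paths as they looked around this patch (simplified from my
reading of mm/slub.c at the time; don't treat it as a verbatim excerpt):

	/*
	 * Bulk path: all objects are uncharged once, up front. Each
	 * per-page detached freelist then goes through slab_free()
	 * with a non-NULL tail (build_detached_freelist() always sets
	 * df.tail, even when the freelist holds a single object).
	 */
	void kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p)
	{
		if (WARN_ON(!size))
			return;

		memcg_slab_free_hook(s, p, size);
		do {
			struct detached_freelist df;

			size = build_detached_freelist(s, size, p, &df);
			if (!df.page)
				continue;

			slab_free(df.s, df.page, df.freelist, df.tail, df.cnt,
				  _RET_IP_);
		} while (likely(size));
	}

	/*
	 * Single-object path: slab_free() is called with tail == NULL,
	 * so do_slab_free() still has to uncharge this one object.
	 */
	void kmem_cache_free(struct kmem_cache *s, void *x)
	{
		s = cache_from_obj(s, x);
		if (!s)
			return;
		slab_free(s, virt_to_head_page(x), x, NULL, 1, _RET_IP_);
		trace_kmem_cache_free(_RET_IP_, x, s->name);
	}

So a non-NULL tail reliably identifies the bulk path, where the uncharge
has already happened; before this patch do_slab_free() uncharged those
objects a second time, which is what skewed the memcg slab counters.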