The patch titled
     Subject: mm, slub: fix incorrect memcg slab count for bulk free
has been added to the -mm tree.  Its filename is
     mm-slub-fix-incorrect-memcg-slab-count-for-bulk-free.patch

This patch should soon appear at
    https://ozlabs.org/~akpm/mmots/broken-out/mm-slub-fix-incorrect-memcg-slab-count-for-bulk-free.patch
and later at
    https://ozlabs.org/~akpm/mmotm/broken-out/mm-slub-fix-incorrect-memcg-slab-count-for-bulk-free.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Miaohe Lin <linmiaohe@xxxxxxxxxx>
Subject: mm, slub: fix incorrect memcg slab count for bulk free

kmem_cache_free_bulk() will call memcg_slab_free_hook() for all objects
when doing bulk free.  So we shouldn't call memcg_slab_free_hook() again
for bulk free, to avoid an incorrect memcg slab count.

Link: https://lkml.kernel.org/r/20210916123920.48704-6-linmiaohe@xxxxxxxxxx
Fixes: d1b2cf6cb84a ("mm: memcg/slab: uncharge during kmem_cache_free_bulk()")
Signed-off-by: Miaohe Lin <linmiaohe@xxxxxxxxxx>
Cc: Andrey Konovalov <andreyknvl@xxxxxxxxx>
Cc: Andrey Ryabinin <ryabinin.a.a@xxxxxxxxx>
Cc: Bharata B Rao <bharata@xxxxxxxxxxxxx>
Cc: Christoph Lameter <cl@xxxxxxxxx>
Cc: David Rientjes <rientjes@xxxxxxxxxx>
Cc: Faiyaz Mohammed <faiyazm@xxxxxxxxxxxxxx>
Cc: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
Cc: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
Cc: Kees Cook <keescook@xxxxxxxxxxxx>
Cc: Pekka Enberg <penberg@xxxxxxxxxx>
Cc: Roman Gushchin <guro@xxxxxx>
Cc: Thomas Garnier <thgarnie@xxxxxxxxxx>
Cc: Vlastimil Babka <vbabka@xxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/slub.c |    4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

--- a/mm/slub.c~mm-slub-fix-incorrect-memcg-slab-count-for-bulk-free
+++ a/mm/slub.c
@@ -3420,7 +3420,9 @@ static __always_inline void do_slab_free
 	struct kmem_cache_cpu *c;
 	unsigned long tid;
 
-	memcg_slab_free_hook(s, &head, 1);
+	/* memcg_slab_free_hook() is already called for bulk free. */
+	if (!tail)
+		memcg_slab_free_hook(s, &head, 1);
 redo:
 	/*
 	 * Determine the currently cpus per cpu slab.
_

Patches currently in -mm which might be from linmiaohe@xxxxxxxxxx are

mm-slub-fix-two-bugs-in-slab_debug_trace_open.patch
mm-slub-fix-mismatch-between-reconstructed-freelist-depth-and-cnt.patch
mm-slub-fix-potential-memoryleak-in-kmem_cache_open.patch
mm-slub-fix-potential-use-after-free-in-slab_debugfs_fops.patch
mm-slub-fix-incorrect-memcg-slab-count-for-bulk-free.patch
mm-page_allocc-remove-meaningless-vm_bug_on-in-pindex_to_order.patch
mm-page_allocc-simplify-the-code-by-using-macro-k.patch
mm-page_allocc-fix-obsolete-comment-in-free_pcppages_bulk.patch
mm-page_allocc-use-helper-function-zone_spans_pfn.patch
mm-page_allocc-avoid-allocating-highmem-pages-via-alloc_pages_exact.patch
mm-page_isolation-fix-potential-missing-call-to-unset_migratetype_isolate.patch
mm-page_isolation-guard-against-possible-putback-unisolated-page.patch
mm-memory_hotplug-make-hwpoisoned-dirty-swapcache-pages-unmovable.patch
mm-zsmallocc-close-race-window-between-zs_pool_dec_isolated-and-zs_unregister_migration.patch
mm-zsmallocc-combine-two-atomic-ops-in-zs_pool_dec_isolated.patch
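
The fix leans on do_slab_free()'s calling convention: the single-object
path (kmem_cache_free() -> slab_free()) passes tail == NULL with cnt == 1,
while kmem_cache_free_bulk() builds a detached freelist, calls
memcg_slab_free_hook() once for all objects up front, and always passes a
non-NULL tail.  The standalone sketch below models that convention to show
why gating the hook on !tail avoids the double uncharge; all mock_* names
are hypothetical stand-ins for illustration, not the kernel implementation.

/*
 * Standalone model of SLUB's single vs. bulk free paths.  The mock_*
 * names are invented for this sketch and do not exist in mm/slub.c.
 */
#include <stdio.h>
#include <stddef.h>

static int uncharges;	/* objects uncharged from the memcg so far */

/* Stands in for memcg_slab_free_hook(): uncharge 'cnt' objects. */
static void mock_memcg_slab_free_hook(size_t cnt)
{
	uncharges += cnt;
}

/* Stands in for do_slab_free(): tail == NULL only on single-object free. */
static void mock_do_slab_free(void *head, void *tail, size_t cnt)
{
	(void)head;
	(void)cnt;
	/* memcg_slab_free_hook() was already called for bulk free. */
	if (!tail)
		mock_memcg_slab_free_hook(1);
	/* ... the actual fastpath/slowpath freeing would happen here ... */
}

/* Stands in for kmem_cache_free() -> slab_free(): tail = NULL, cnt = 1. */
static void mock_kmem_cache_free(void *obj)
{
	mock_do_slab_free(obj, NULL, 1);
}

/* Stands in for kmem_cache_free_bulk(): hook runs once for all objects. */
static void mock_kmem_cache_free_bulk(void **p, size_t size)
{
	mock_memcg_slab_free_hook(size);
	/* A detached freelist links p[0]..p[size-1]; its tail is non-NULL. */
	mock_do_slab_free(p[0], p[size - 1], size);
}

int main(void)
{
	int a, b, c;
	void *objs[] = { &a, &b, &c };

	mock_kmem_cache_free(&a);		/* 1 uncharge */
	mock_kmem_cache_free_bulk(objs, 3);	/* 3 uncharges, not 4 */

	printf("uncharges: %d (expected 4)\n", uncharges);
	return 0;
}

Without the !tail check, the head object of each detached freelist would be
uncharged a second time (5 calls here instead of 4), which is exactly the
memcg slab count skew the patch fixes.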