The patch titled
     Subject: mm: memcontrol: fix slub memory accounting
has been added to the -mm tree.  Its filename is
     mm-memcontrol-fix-slub-memory-accounting.patch

This patch should soon appear at
    https://ozlabs.org/~akpm/mmots/broken-out/mm-memcontrol-fix-slub-memory-accounting.patch
and later at
    https://ozlabs.org/~akpm/mmotm/broken-out/mm-memcontrol-fix-slub-memory-accounting.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when
    testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Muchun Song <songmuchun@xxxxxxxxxxxxx>
Subject: mm: memcontrol: fix slub memory accounting

SLUB currently accounts kmalloc() and kmalloc_node() allocations larger
than order-1 pages per node, but it forgets to update the per-memcg
vmstats.  This can lead to inaccurate "slab_unreclaimable" statistics
in memory.stat.  Fix it by using mod_lruvec_page_state() instead of
mod_node_page_state().

Link: https://lkml.kernel.org/r/20210223092423.42420-1-songmuchun@xxxxxxxxxxxxx
Fixes: 6a486c0ad4dc ("mm, sl[ou]b: improve memory accounting")
Signed-off-by: Muchun Song <songmuchun@xxxxxxxxxxxxx>
Reviewed-by: Shakeel Butt <shakeelb@xxxxxxxxxx>
Reviewed-by: Roman Gushchin <guro@xxxxxx>
Reviewed-by: Michal Koutný <mkoutny@xxxxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxxxx>
Cc: Vladimir Davydov <vdavydov.dev@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/slab_common.c |    4 ++--
 mm/slub.c        |    8 ++++----
 2 files changed, 6 insertions(+), 6 deletions(-)

--- a/mm/slab_common.c~mm-memcontrol-fix-slub-memory-accounting
+++ a/mm/slab_common.c
@@ -898,8 +898,8 @@ void *kmalloc_order(size_t size, gfp_t f
 	page = alloc_pages(flags, order);
 	if (likely(page)) {
 		ret = page_address(page);
-		mod_node_page_state(page_pgdat(page), NR_SLAB_UNRECLAIMABLE_B,
-				    PAGE_SIZE << order);
+		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
+				      PAGE_SIZE << order);
 	}
 	ret = kasan_kmalloc_large(ret, size, flags);
 	/* As ret might get tagged, call kmemleak hook after KASAN. */
--- a/mm/slub.c~mm-memcontrol-fix-slub-memory-accounting
+++ a/mm/slub.c
@@ -4042,8 +4042,8 @@ static void *kmalloc_large_node(size_t s
 	page = alloc_pages_node(node, flags, order);
 	if (page) {
 		ptr = page_address(page);
-		mod_node_page_state(page_pgdat(page), NR_SLAB_UNRECLAIMABLE_B,
-				    PAGE_SIZE << order);
+		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
+				      PAGE_SIZE << order);
 	}

 	return kmalloc_large_node_hook(ptr, size, flags);
@@ -4174,8 +4174,8 @@ void kfree(const void *x)

 		BUG_ON(!PageCompound(page));
 		kfree_hook(object);
-		mod_node_page_state(page_pgdat(page), NR_SLAB_UNRECLAIMABLE_B,
-				    -(PAGE_SIZE << order));
+		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
+				      -(PAGE_SIZE << order));
 		__free_pages(page, order);
 		return;
 	}
_

Patches currently in -mm which might be from songmuchun@xxxxxxxxxxxxx are

mm-memcontrol-optimize-per-lruvec-stats-counter-memory-usage.patch
mm-memcontrol-fix-nr_anon_thps-accounting-in-charge-moving.patch
mm-memcontrol-convert-nr_anon_thps-account-to-pages.patch
mm-memcontrol-convert-nr_file_thps-account-to-pages.patch
mm-memcontrol-convert-nr_shmem_thps-account-to-pages.patch
mm-memcontrol-convert-nr_shmem_pmdmapped-account-to-pages.patch
mm-memcontrol-convert-nr_file_pmdmapped-account-to-pages.patch
mm-memcontrol-make-the-slab-calculation-consistent.patch
mm-memcontrol-replace-the-loop-with-a-list_for_each_entry.patch
mm-memcontrol-fix-swap-undercounting-in-cgroup2.patch
mm-memcontrol-fix-get_active_memcg-return-value.patch
mm-memcontrol-fix-slub-memory-accounting.patch
hugetlb-convert-page_huge_active-hpagemigratable-flag-fix.patch
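As context for the change, here is a minimal sketch of the accounting
pattern the patch establishes.  example_alloc_large() is a hypothetical
caller invented for illustration (it is not a function in the kernel
tree); it mirrors what the patch does in kmalloc_order(),
kmalloc_large_node() and kfree():

/*
 * Hypothetical caller, for illustration only: account a large kmalloc
 * page against both the node and the memcg the page is charged to.
 */
static void *example_alloc_large(gfp_t flags, unsigned int order)
{
	void *ret = NULL;
	struct page *page = alloc_pages(flags | __GFP_COMP, order);

	if (page) {
		ret = page_address(page);
		/*
		 * The old mod_node_page_state(page_pgdat(page), ...)
		 * call updated only the global per-node counter.  The
		 * lruvec variant takes the page itself, so it can also
		 * charge the per-memcg counter that backs
		 * "slab_unreclaimable" in memory.stat.
		 */
		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
				      PAGE_SIZE << order);
	}
	return ret;
}

The free side is symmetric: as the kfree() hunk above shows, the same
helper is called with a negated byte count, -(PAGE_SIZE << order), so
that both counters are decremented when the page is released.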