The patch titled
     Subject: mm: slub: call account_slab_page() after slab page initialization
has been added to the -mm tree.  Its filename is
     mm-slub-call-account_slab_page-after-slab-page-initialization.patch

This patch should soon appear at
    https://ozlabs.org/~akpm/mmots/broken-out/mm-slub-call-account_slab_page-after-slab-page-initialization.patch
and later at
    https://ozlabs.org/~akpm/mmotm/broken-out/mm-slub-call-account_slab_page-after-slab-page-initialization.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Roman Gushchin <guro@xxxxxx>
Subject: mm: slub: call account_slab_page() after slab page initialization

It's convenient to have page->objects initialized before calling into
account_slab_page().  In particular, this information can be used to
pre-alloc the obj_cgroup vector.

Let's call account_slab_page() a bit later, after the initialization of
page->objects.

This commit doesn't bring any functional change, but is required for
further optimizations.

Link: https://lkml.kernel.org/r/20201110195753.530157-1-guro@xxxxxx
Signed-off-by: Roman Gushchin <guro@xxxxxx>
Cc: Shakeel Butt <shakeelb@xxxxxxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxxxx>
Cc: Christoph Lameter <cl@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/slub.c |    5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

--- a/mm/slub.c~mm-slub-call-account_slab_page-after-slab-page-initialization
+++ a/mm/slub.c
@@ -1616,9 +1616,6 @@ static inline struct page *alloc_slab_pa
 	else
 		page = __alloc_pages_node(node, flags, order);
 
-	if (page)
-		account_slab_page(page, order, s);
-
 	return page;
 }
 
@@ -1771,6 +1768,8 @@ static struct page *allocate_slab(struct
 
 	page->objects = oo_objects(oo);
 
+	account_slab_page(page, oo_order(oo), s, flags);
+
 	page->slab_cache = s;
 	__SetPageSlab(page);
 	if (page_is_pfmemalloc(page))
_

Patches currently in -mm which might be from guro@xxxxxx are

mm-memcontrol-use-helpers-to-read-pages-memcg-data.patch
mm-memcontrol-slab-use-helpers-to-access-slab-pages-memcg_data.patch
mm-introduce-page-memcg-flags.patch
mm-convert-page-kmemcg-type-to-a-page-memcg-flag.patch
mm-memcg-fix-obsolete-code-comments.patch
mm-slub-call-account_slab_page-after-slab-page-initialization.patch
mm-memcg-slab-pre-allocate-obj_cgroups-for-slab-caches-with-slab_account.patch
mm-vmstat-fix-proc-sys-vm-stat_refresh-generating-false-warnings.patch
mm-vmstat-fix-proc-sys-vm-stat_refresh-generating-false-warnings-fix.patch
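
P.S. (editorial sketch, not part of the patch): the pre-allocation the
changelog alludes to is done by the later
mm-memcg-slab-pre-allocate-obj_cgroups-for-slab-caches-with-slab_account.patch
in the list above.  A minimal sketch of that direction, assuming helper
names such as memcg_alloc_page_obj_cgroups() and objs_per_slab_page()
(modelled on the obj_cgroup work queued in -mm, not necessarily the final
API), could look roughly like this once account_slab_page() runs after
page->objects has been set:

/*
 * Rough sketch only: size the per-object obj_cgroup vector from
 * page->objects, which is valid because account_slab_page() is now
 * called after page->objects has been initialized.
 */
static int memcg_alloc_page_obj_cgroups(struct page *page,
					struct kmem_cache *s, gfp_t gfp)
{
	unsigned int objects = objs_per_slab_page(s, page);
	void *vec;

	/* one obj_cgroup pointer per object on this slab page */
	vec = kcalloc_node(objects, sizeof(struct obj_cgroup *), gfp,
			   page_to_nid(page));
	if (!vec)
		return -ENOMEM;

	page->memcg_data = (unsigned long)vec | MEMCG_DATA_OBJCGS;
	return 0;
}

static inline void account_slab_page(struct page *page, int order,
				     struct kmem_cache *s, gfp_t gfp)
{
	/* page->objects is already set up by allocate_slab() at this point */
	if (memcg_kmem_enabled() && (s->flags & SLAB_ACCOUNT))
		memcg_alloc_page_obj_cgroups(page, s, gfp);

	mod_node_page_state(page_pgdat(page), cache_vmstat_idx(s),
			    PAGE_SIZE << order);
}

Because the vector would be sized from page->objects, moving the
account_slab_page() call below the page->objects assignment is a
prerequisite for that follow-up rather than a pure cleanup.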