The patch titled
     Subject: slab: allocate frozen pages
has been added to the -mm mm-unstable branch.  Its filename is
     slab-allocate-frozen-pages.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/slab-allocate-frozen-pages.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: "Matthew Wilcox (Oracle)" <willy@xxxxxxxxxxxxx>
Subject: slab: allocate frozen pages
Date: Mon, 25 Nov 2024 21:01:47 +0000

Since slab does not use the page refcount, it can allocate and free
frozen pages, saving one atomic operation per free.

Link: https://lkml.kernel.org/r/20241125210149.2976098-16-willy@xxxxxxxxxxxxx
Signed-off-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Reviewed-by: William Kucharski <william.kucharski@xxxxxxxxxx>
Cc: Hyeonggon Yoo <42.hyeyoo@xxxxxxxxx>
Cc: Vlastimil Babka <vbabka@xxxxxxx>
Cc: David Hildenbrand <david@xxxxxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxxx>
Cc: Miaohe Lin <linmiaohe@xxxxxxxxxx>
Cc: Muchun Song <songmuchun@xxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/slub.c |    6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

--- a/mm/slub.c~slab-allocate-frozen-pages
+++ a/mm/slub.c
@@ -2405,9 +2405,9 @@ static inline struct slab *alloc_slab_pa
 	unsigned int order = oo_order(oo);
 
 	if (node == NUMA_NO_NODE)
-		folio = (struct folio *)alloc_pages(flags, order);
+		folio = (struct folio *)alloc_frozen_pages(flags, order);
 	else
-		folio = (struct folio *)__alloc_pages_node(node, flags, order);
+		folio = (struct folio *)__alloc_frozen_pages(flags, order, node, NULL);
 
 	if (!folio)
 		return NULL;
@@ -2641,7 +2641,7 @@ static void __free_slab(struct kmem_cach
 	__folio_clear_slab(folio);
 	mm_account_reclaimed_pages(pages);
 	unaccount_slab(slab, order, s);
-	__free_pages(&folio->page, order);
+	free_frozen_pages(&folio->page, order);
 }
 
 static void rcu_free_slab(struct rcu_head *h)
_

Patches currently in -mm which might be from willy@xxxxxxxxxxxxx are

mm-page_alloc-cache-page_zone-result-in-free_unref_page.patch
mm-make-alloc_pages_mpol-static.patch
mm-page_alloc-export-free_frozen_pages-instead-of-free_unref_page.patch
mm-page_alloc-move-set_page_refcounted-to-callers-of-post_alloc_hook.patch
mm-page_alloc-move-set_page_refcounted-to-callers-of-prep_new_page.patch
mm-page_alloc-move-set_page_refcounted-to-callers-of-get_page_from_freelist.patch
mm-page_alloc-move-set_page_refcounted-to-callers-of-__alloc_pages_cpuset_fallback.patch
mm-page_alloc-move-set_page_refcounted-to-callers-of-__alloc_pages_may_oom.patch
mm-page_alloc-move-set_page_refcounted-to-callers-of-__alloc_pages_direct_compact.patch
mm-page_alloc-move-set_page_refcounted-to-callers-of-__alloc_pages_direct_reclaim.patch
mm-page_alloc-move-set_page_refcounted-to-callers-of-__alloc_pages_slowpath.patch
mm-page_alloc-move-set_page_refcounted-to-end-of-__alloc_pages.patch
mm-page_alloc-add-__alloc_frozen_pages.patch
mm-mempolicy-add-alloc_frozen_pages.patch
slab-allocate-frozen-pages.patch
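
For context on where the saved atomic operation comes from: below is a
minimal sketch of the refcounted free path, written against the earlier
patches in this series (which rename free_unref_page() to
free_frozen_pages()).  It is a simplified paraphrase, not code from this
patch; the real __free_pages() in mm/page_alloc.c also has to handle
stray speculative references to tail pages, which is omitted here.

/* Refcounted free: the last reference must be dropped atomically. */
void __free_pages(struct page *page, unsigned int order)
{
	if (put_page_testzero(page))	/* atomic_dec_and_test() on _refcount */
		free_frozen_pages(page, order);
	/* tail-page handling for speculative references omitted */
}

/*
 * alloc_frozen_pages() hands back pages with a refcount of zero, so
 * slab can call free_frozen_pages() directly and skip the atomic
 * dec-and-test above -- the "one atomic operation per free" the
 * changelog refers to.
 */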