The patch titled
     Subject: mm: define __init_reserved_page_zone function
has been added to the -mm mm-unstable branch.  Its filename is
     mm-define-__init_reserved_page_zone-function.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-define-__init_reserved_page_zone-function.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Frank van der Linden <fvdl@xxxxxxxxxx>
Subject: mm: define __init_reserved_page_zone function
Date: Wed, 29 Jan 2025 22:41:42 +0000

Sometimes page structs must be unconditionally initialized as reserved,
regardless of DEFERRED_STRUCT_PAGE_INIT.

Define a function, __init_reserved_page_zone, containing code that
already did all of the work in init_reserved_page, and make it available
for use.

Link: https://lkml.kernel.org/r/20250129224157.2046079-14-fvdl@xxxxxxxxxx
Signed-off-by: Frank van der Linden <fvdl@xxxxxxxxxx>
Cc: Alexander Gordeev <agordeev@xxxxxxxxxxxxx>
Cc: Andy Lutomirski <luto@xxxxxxxxxx>
Cc: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
Cc: Heiko Carstens <hca@xxxxxxxxxxxxx>
Cc: Joao Martins <joao.m.martins@xxxxxxxxxx>
Cc: Madhavan Srinivasan <maddy@xxxxxxxxxxxxx>
Cc: Michael Ellerman <mpe@xxxxxxxxxxxxxx>
Cc: Muchun Song <muchun.song@xxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Roman Gushchin (Cruise) <roman.gushchin@xxxxxxxxx>
Cc: Usama Arif <usamaarif642@xxxxxxxxx>
Cc: Vasily Gorbik <gor@xxxxxxxxxxxxx>
Cc: Yu Zhao <yuzhao@xxxxxxxxxx>
Cc: Zhenguo Yao <yaozhenguo1@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/internal.h |    1 +
 mm/mm_init.c  |   38 +++++++++++++++++++++++---------------
 2 files changed, 24 insertions(+), 15 deletions(-)

--- a/mm/internal.h~mm-define-__init_reserved_page_zone-function
+++ a/mm/internal.h
@@ -1448,6 +1448,7 @@ static inline bool pte_needs_soft_dirty_
 
 void __meminit __init_single_page(struct page *page, unsigned long pfn,
 				unsigned long zone, int nid);
+void __meminit __init_reserved_page_zone(unsigned long pfn, int nid);
 
 /* shrinker related functions */
 unsigned long shrink_slab(gfp_t gfp_mask, int nid, struct mem_cgroup *memcg,
--- a/mm/mm_init.c~mm-define-__init_reserved_page_zone-function
+++ a/mm/mm_init.c
@@ -650,6 +650,28 @@ static inline void fixup_hashdist(void)
 static inline void fixup_hashdist(void) {}
 #endif /* CONFIG_NUMA */
 
+/*
+ * Initialize a reserved page unconditionally, finding its zone first.
+ */
+void __meminit __init_reserved_page_zone(unsigned long pfn, int nid)
+{
+	pg_data_t *pgdat;
+	int zid;
+
+	pgdat = NODE_DATA(nid);
+
+	for (zid = 0; zid < MAX_NR_ZONES; zid++) {
+		struct zone *zone = &pgdat->node_zones[zid];
+
+		if (zone_spans_pfn(zone, pfn))
+			break;
+	}
+	__init_single_page(pfn_to_page(pfn), pfn, zid, nid);
+
+	if (pageblock_aligned(pfn))
+		set_pageblock_migratetype(pfn_to_page(pfn), MIGRATE_MOVABLE);
+}
+
 #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
 static inline void pgdat_set_deferred_range(pg_data_t *pgdat)
 {
@@ -708,24 +730,10 @@ defer_init(int nid, unsigned long pfn, u
 
 static void __meminit init_reserved_page(unsigned long pfn, int nid)
 {
-	pg_data_t *pgdat;
-	int zid;
-
 	if (early_page_initialised(pfn, nid))
 		return;
 
-	pgdat = NODE_DATA(nid);
-
-	for (zid = 0; zid < MAX_NR_ZONES; zid++) {
-		struct zone *zone = &pgdat->node_zones[zid];
-
-		if (zone_spans_pfn(zone, pfn))
-			break;
-	}
-	__init_single_page(pfn_to_page(pfn), pfn, zid, nid);
-
-	if (pageblock_aligned(pfn))
-		set_pageblock_migratetype(pfn_to_page(pfn), MIGRATE_MOVABLE);
+	__init_reserved_page_zone(pfn, nid);
 }
 #else
 static inline void pgdat_set_deferred_range(pg_data_t *pgdat) {}
_

Patches currently in -mm which might be from fvdl@xxxxxxxxxx are

mm-cma-export-total-and-free-number-of-pages-for-cma-areas.patch
mm-cma-support-multiple-contiguous-ranges-if-requested.patch
mm-cma-introduce-cma_intersects-function.patch
mm-hugetlb-use-cma_declare_contiguous_multi.patch
mm-hugetlb-fix-round-robin-bootmem-allocation.patch
mm-hugetlb-remove-redundant-__clearpagereserved.patch
mm-hugetlb-use-online-nodes-for-bootmem-allocation.patch
mm-hugetlb-convert-cmdline-parameters-from-setup-to-early.patch
x86-mm-make-register_page_bootmem_memmap-handle-pte-mappings.patch
mm-bootmem_info-export-register_page_bootmem_memmap.patch
mm-sparse-allow-for-alternate-vmemmap-section-init-at-boot.patch
mm-hugetlb-set-migratetype-for-bootmem-folios.patch
mm-define-__init_reserved_page_zone-function.patch
mm-hugetlb-check-bootmem-pages-for-zone-intersections.patch
mm-sparse-add-vmemmap__hvo-functions.patch
mm-hugetlb-deal-with-multiple-calls-to-hugetlb_bootmem_alloc.patch
mm-hugetlb-move-huge_boot_pages-list-init-to-hugetlb_bootmem_alloc.patch
mm-hugetlb-add-pre-hvo-framework.patch
mm-hugetlb_vmemmap-fix-hugetlb_vmemmap_restore_folios-definition.patch
mm-hugetlb-do-pre-hvo-for-bootmem-allocated-pages.patch
x86-setup-call-hugetlb_bootmem_alloc-early.patch
x86-mm-set-arch_want_sparsemem_vmemmap_preinit.patch
mm-cma-simplify-zone-intersection-check.patch
mm-cma-introduce-a-cma-validate-function.patch
mm-cma-introduce-interface-for-early-reservations.patch
mm-hugetlb-add-hugetlb_cma_only-cmdline-option.patch
mm-hugetlb-enable-bootmem-allocation-from-cma-areas.patch
mm-hugetlb-move-hugetlb-cma-code-in-to-its-own-file.patch
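For context, a minimal sketch of how the newly exported helper might be used
by a caller that must initialize pages as reserved regardless of
DEFERRED_STRUCT_PAGE_INIT. This is not part of the patch or the series: the
function example_init_reserved_range() and its use of __SetPageReserved()
are illustrative assumptions only.

/*
 * Illustrative sketch, not from this series: a hypothetical boot-time
 * caller initializing a PFN range as reserved unconditionally, using
 * the exported helper instead of duplicating the zone lookup.
 */
#include "internal.h"	/* declares __init_reserved_page_zone() */

static void __init example_init_reserved_range(unsigned long start_pfn,
					       unsigned long end_pfn, int nid)
{
	unsigned long pfn;

	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
		/* Find the spanning zone and initialize the struct page. */
		__init_reserved_page_zone(pfn, nid);
		/* Mark the page reserved, as such callers typically do. */
		__SetPageReserved(pfn_to_page(pfn));
	}
}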