The patch titled
     Subject: mm/mm_init.c: add new function calc_nr_all_pages()
has been added to the -mm mm-unstable branch.  Its filename is
     mm-mm_initc-add-new-function-calc_nr_all_pages.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-mm_initc-add-new-function-calc_nr_all_pages.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Baoquan He <bhe@xxxxxxxxxx>
Subject: mm/mm_init.c: add new function calc_nr_all_pages()
Date: Mon, 25 Mar 2024 22:56:43 +0800

This is preparation for calculating nr_kernel_pages and nr_all_pages, both
of which will be used later in alloc_large_system_hash().

nr_all_pages counts up all free but not reserved memory in the memblock
allocator, including HIGHMEM memory, while nr_kernel_pages counts up all
free but not reserved low memory in the memblock allocator, excluding
HIGHMEM memory.

Link: https://lkml.kernel.org/r/20240325145646.1044760-4-bhe@xxxxxxxxxx
Signed-off-by: Baoquan He <bhe@xxxxxxxxxx>
Cc: "Mike Rapoport (IBM)" <rppt@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/mm_init.c |   24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

--- a/mm/mm_init.c~mm-mm_initc-add-new-function-calc_nr_all_pages
+++ a/mm/mm_init.c
@@ -1265,6 +1265,30 @@ static void __init reset_memoryless_node
 	pr_debug("On node %d totalpages: 0\n", pgdat->node_id);
 }
 
+static void __init calc_nr_kernel_pages(void)
+{
+	unsigned long start_pfn, end_pfn;
+	phys_addr_t start_addr, end_addr;
+	u64 u;
+#ifdef CONFIG_HIGHMEM
+	unsigned long high_zone_low = arch_zone_lowest_possible_pfn[ZONE_HIGHMEM];
+#endif
+
+	for_each_free_mem_range(u, NUMA_NO_NODE, MEMBLOCK_NONE, &start_addr, &end_addr, NULL) {
+		start_pfn = PFN_UP(start_addr);
+		end_pfn = PFN_DOWN(end_addr);
+
+		if (start_pfn < end_pfn) {
+			nr_all_pages += end_pfn - start_pfn;
+#ifdef CONFIG_HIGHMEM
+			start_pfn = clamp(start_pfn, 0, high_zone_low);
+			end_pfn = clamp(end_pfn, 0, high_zone_low);
+#endif
+			nr_kernel_pages += end_pfn - start_pfn;
+		}
+	}
+}
+
 static void __init calculate_node_totalpages(struct pglist_data *pgdat,
 						unsigned long node_start_pfn,
 						unsigned long node_end_pfn)
_

Patches currently in -mm which might be from bhe@xxxxxxxxxx are

mm-vmallocc-optimize-to-reduce-arguments-of-alloc_vmap_area.patch
x86-remove-unneeded-memblock_find_dma_reserve.patch
mm-mm_initc-remove-the-useless-dma_reserve.patch
mm-mm_initc-add-new-function-calc_nr_all_pages.patch
mm-mm_initc-remove-meaningless-calculation-of-zone-managed_pages-in-free_area_init_core.patch
mm-mm_initc-remove-unneeded-calc_memmap_size.patch
mm-mm_initc-remove-arch_reserved_kernel_pages.patch
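
For readers who want to see the counting logic in isolation, below is a
minimal stand-alone C sketch (not part of the patch above) that mimics how
calc_nr_kernel_pages() accumulates nr_all_pages and nr_kernel_pages by
clamping each free PFN range against the ZONE_HIGHMEM boundary.  The PFN
ranges, the high_zone_low value and the CLAMP macro are made-up stand-ins
for the kernel's free memblock ranges,
arch_zone_lowest_possible_pfn[ZONE_HIGHMEM] and clamp().

/*
 * Illustrative user-space sketch of the clamp-based HIGHMEM exclusion.
 * All values below are hypothetical example data, not kernel state.
 */
#include <stdio.h>

#define CLAMP(v, lo, hi) ((v) < (lo) ? (lo) : ((v) > (hi) ? (hi) : (v)))

int main(void)
{
	/* hypothetical first PFN of ZONE_HIGHMEM (896 MiB with 4 KiB pages) */
	unsigned long high_zone_low = 0x38000;

	/* hypothetical free ranges as [start_pfn, end_pfn) pairs */
	unsigned long ranges[][2] = {
		{ 0x00100, 0x20000 },	/* entirely low memory        */
		{ 0x30000, 0x40000 },	/* straddles the HIGHMEM line */
		{ 0x50000, 0x60000 },	/* entirely HIGHMEM           */
	};

	unsigned long nr_all_pages = 0, nr_kernel_pages = 0;

	for (unsigned int i = 0; i < sizeof(ranges) / sizeof(ranges[0]); i++) {
		unsigned long start_pfn = ranges[i][0];
		unsigned long end_pfn = ranges[i][1];

		if (start_pfn < end_pfn) {
			/* every free page counts toward nr_all_pages */
			nr_all_pages += end_pfn - start_pfn;

			/* only the part below ZONE_HIGHMEM counts as kernel pages */
			start_pfn = CLAMP(start_pfn, 0UL, high_zone_low);
			end_pfn = CLAMP(end_pfn, 0UL, high_zone_low);
			nr_kernel_pages += end_pfn - start_pfn;
		}
	}

	printf("nr_all_pages    = %lu\n", nr_all_pages);
	printf("nr_kernel_pages = %lu\n", nr_kernel_pages);
	return 0;
}

Clamping both ends of the range to the HIGHMEM boundary makes a range that
lies entirely above the boundary contribute zero to nr_kernel_pages while a
straddling range contributes only its low-memory part, which is the same
effect the patch achieves in calc_nr_kernel_pages().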