The patch titled
     Subject: mm/mm_init.c: remove the useless dma_reserve
has been added to the -mm mm-unstable branch.  Its filename is
     mm-mm_initc-remove-the-useless-dma_reserve.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-mm_initc-remove-the-useless-dma_reserve.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Baoquan He <bhe@xxxxxxxxxx>
Subject: mm/mm_init.c: remove the useless dma_reserve
Date: Mon, 18 Mar 2024 22:21:33 +0800

Patch series "mm/mm_init.c: refactor free_area_init_core()".

In free_area_init_core(), the code that calculates zone->managed_pages
and subtracts dma_reserve from the DMA zone looks very confusing.

From the git history, the code calculating zone->managed_pages was
originally for zone->present_pages.  That early, rough assignment was
meant to optimize the zone's pcp and watermark setup.  Later,
managed_pages was introduced into struct zone to represent the number of
pages managed by the buddy allocator.  Nowadays zone->managed_pages is
zeroed out and reset in mem_init() when memblock_free_all() is called,
and the zone's pcp and watermark setup, which rely on the actual
zone->managed_pages, are done later than the mem_init() invocation.  So
there is no need to rush to calculate and set zone->managed_pages early;
just set it to zone->present_pages and adjust it in mem_init().

Also add a new function, calc_nr_kernel_pages(), to count the free but
not reserved pages in memblock, and assign the result to nr_all_pages
and nr_kernel_pages after the memmap pages have been allocated.


This patch (of 6):

Variable dma_reserve and its usage were introduced in commit 0e0b864e069c
("[PATCH] Account for memmap and optionally the kernel image as holes").
Its original purpose was to account for the reserved pages in the DMA
zone so that the DMA zone's watermark calculation would be more accurate
on x86.

However, there is now zone->managed_pages to account for all pages
available to the buddy allocator, and zone->present_pages to account for
all present physical pages in the zone.  More importantly, on x86,
calculating and setting zone->managed_pages in free_area_init_core() is
only a temporary step: every zone's managed_pages is zeroed out and reset
in mem_init() to the actual number of pages handed to the buddy
allocator.  Before mem_init(), no buddy allocation is requested, and the
zone's pcp and watermark setup are all done after mem_init().  So there
is no need to worry about the accuracy of the DMA zone's settings during
free_area_init().

Hence, remove dma_reserve and its handling in free_area_init_core(); it
is useless and only causes confusion.
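As an aside for reviewers: calc_nr_kernel_pages() is only described above
and is not part of this patch.  A minimal, illustrative sketch of the
idea, assuming nothing more than a walk of the free memblock ranges (the
helper actually added later in this series may differ in detail):

/*
 * Illustrative sketch only, not the code from this series: count the
 * pages that are in memblock.memory but not in memblock.reserved, i.e.
 * the pages that will eventually be released to the buddy allocator.
 */
static unsigned long __init calc_nr_kernel_pages(void)
{
	phys_addr_t start_addr, end_addr;
	unsigned long start_pfn, end_pfn;
	unsigned long nr_pages = 0;
	u64 i;

	/* for_each_free_mem_range() skips the reserved regions for us */
	for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
				&start_addr, &end_addr, NULL) {
		start_pfn = PFN_UP(start_addr);
		end_pfn = PFN_DOWN(end_addr);
		if (start_pfn < end_pfn)
			nr_pages += end_pfn - start_pfn;
	}

	return nr_pages;
}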
Link: https://lkml.kernel.org/r/20240318142138.783350-1-bhe@xxxxxxxxxx
Link: https://lkml.kernel.org/r/20240318142138.783350-2-bhe@xxxxxxxxxx
Signed-off-by: Baoquan He <bhe@xxxxxxxxxx>
Cc: Mike Rapoport (IBM) <rppt@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 arch/x86/mm/init.c |    2 --
 include/linux/mm.h |    1 -
 mm/mm_init.c       |   23 -----------------------
 3 files changed, 26 deletions(-)

--- a/arch/x86/mm/init.c~mm-mm_initc-remove-the-useless-dma_reserve
+++ a/arch/x86/mm/init.c
@@ -1032,8 +1032,6 @@ void __init memblock_find_dma_reserve(vo
 		if (start_pfn < end_pfn)
 			nr_free_pages += end_pfn - start_pfn;
 	}
-
-	set_dma_reserve(nr_pages - nr_free_pages);
 #endif
 }
 
--- a/include/linux/mm.h~mm-mm_initc-remove-the-useless-dma_reserve
+++ a/include/linux/mm.h
@@ -3210,7 +3210,6 @@ static inline int early_pfn_to_nid(unsig
 extern int __meminit early_pfn_to_nid(unsigned long pfn);
 #endif
 
-extern void set_dma_reserve(unsigned long new_dma_reserve);
 extern void mem_init(void);
 extern void __init mmap_init(void);
 
--- a/mm/mm_init.c~mm-mm_initc-remove-the-useless-dma_reserve
+++ a/mm/mm_init.c
@@ -226,7 +226,6 @@ static unsigned long required_movablecor
 
 static unsigned long nr_kernel_pages __initdata;
 static unsigned long nr_all_pages __initdata;
-static unsigned long dma_reserve __initdata;
 
 static bool deferred_struct_pages __meminitdata;
 
@@ -1583,12 +1582,6 @@ static void __init free_area_init_core(s
 					zone_names[j], memmap_pages, freesize);
 		}
 
-		/* Account for reserved pages */
-		if (j == 0 && freesize > dma_reserve) {
-			freesize -= dma_reserve;
-			pr_debug("  %s zone: %lu pages reserved\n", zone_names[0], dma_reserve);
-		}
-
 		if (!is_highmem_idx(j))
 			nr_kernel_pages += freesize;
 		/* Charge for highmem memmap if there are enough kernel pages */
@@ -2547,22 +2540,6 @@ void *__init alloc_large_system_hash(con
 	return table;
 }
 
-/**
- * set_dma_reserve - set the specified number of pages reserved in the first zone
- * @new_dma_reserve: The number of pages to mark reserved
- *
- * The per-cpu batchsize and zone watermarks are determined by managed_pages.
- * In the DMA zone, a significant percentage may be consumed by kernel image
- * and other unfreeable allocations which can skew the watermarks badly. This
- * function may optionally be used to account for unfreeable pages in the
- * first zone (e.g., ZONE_DMA). The effect will be lower watermarks and
- * smaller per-cpu batchsize.
- */
-void __init set_dma_reserve(unsigned long new_dma_reserve)
-{
-	dma_reserve = new_dma_reserve;
-}
-
 void __init memblock_free_pages(struct page *page, unsigned long pfn,
 							unsigned int order)
 {
_

Patches currently in -mm which might be from bhe@xxxxxxxxxx are

mm-mm_initc-remove-the-useless-dma_reserve.patch
x86-remove-memblock_find_dma_reserve.patch
mm-mm_initc-add-new-function-calc_nr_kernel_pages.patch
mm-mm_initc-remove-meaningless-calculation-of-zone-managed_pages-in-free_area_init_core.patch
mm-mm_initc-remove-unneeded-calc_memmap_size.patch
mm-mm_initc-remove-arch_reserved_kernel_pages.patch