In function free_area_init_core(), the code that calculates
zone->managed_pages and subtracts dma_reserve from the DMA zone looks
very confusing.

From git history, the code calculating zone->managed_pages originally
calculated zone->present_pages. That early rough assignment was meant
to optimize the zone's pcp and watermark settings. Later,
managed_pages was introduced into struct zone to represent the number
of pages managed by the buddy allocator.

Now zone->managed_pages is zeroed out and reset in mem_init() when
memblock_free_all() is called, and the zone's pcp and watermark
settings that rely on the actual zone->managed_pages are done after
the mem_init() invocation. So there is no need to rush to calculate
and set zone->managed_pages early; it can simply be initialized as 0
in free_area_init_core() and adjusted in mem_init().

Also add a new function calc_nr_kernel_pages() to count the free but
not reserved pages in memblock, then assign the result to nr_all_pages
and nr_kernel_pages after memmap pages are allocated (a rough sketch
of the counting idea follows the diffstat below).

Changelog:
----------
v1->v2:
=======
These are all suggested by Mike, thanks to him.
- Swap the order of patches 1 and 2 of v1 to better describe the code
  change, as Mike suggested.
- Change to initialize zone->managed_pages as 0 in
  free_area_init_core() since no page has been added into the buddy
  system yet, and improve the ambiguous description in the log. These
  are all in patch 4.

Baoquan He (6):
  x86: remove unneeded memblock_find_dma_reserve()
  mm/mm_init.c: remove the useless dma_reserve
  mm/mm_init.c: add new function calc_nr_all_pages()
  mm/mm_init.c: remove meaningless calculation of zone->managed_pages
    in free_area_init_core()
  mm/mm_init.c: remove unneeded calc_memmap_size()
  mm/mm_init.c: remove arch_reserved_kernel_pages()

 arch/powerpc/include/asm/mmu.h |   4 --
 arch/powerpc/kernel/fadump.c   |   5 --
 arch/x86/include/asm/pgtable.h |   1 -
 arch/x86/kernel/setup.c        |   2 -
 arch/x86/mm/init.c             |  47 -------------
 include/linux/mm.h             |   4 --
 mm/mm_init.c                   | 125 ++++++++-------------------------
 7 files changed, 29 insertions(+), 159 deletions(-)

-- 
2.41.0
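
P.S. For reviewers who want a quick feel for the new counting before
opening patch 3, here is a minimal sketch of the idea, not the code
the patch actually adds: walk memblock's free (in-memory but not
reserved) ranges and sum their pages. Names follow the cover text
above; the actual helper may also need to clamp nr_kernel_pages so it
excludes highmem on CONFIG_HIGHMEM configurations, which this sketch
omits.

	/*
	 * Sketch only, not the patch. Counts pages that memblock
	 * considers free, i.e. present in memory but not reserved.
	 * Assumes linux/memblock.h, linux/pfn.h and linux/numa.h;
	 * nr_all_pages/nr_kernel_pages are the existing counters in
	 * mm/mm_init.c. Without highmem handling the two are equal.
	 */
	static void __init calc_nr_kernel_pages(void)
	{
		phys_addr_t start_addr, end_addr;
		unsigned long start_pfn, end_pfn;
		u64 i;

		for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
					&start_addr, &end_addr, NULL) {
			/* Only whole pages inside the free range count. */
			start_pfn = PFN_UP(start_addr);
			end_pfn = PFN_DOWN(end_addr);

			if (start_pfn < end_pfn) {
				nr_all_pages += end_pfn - start_pfn;
				nr_kernel_pages += end_pfn - start_pfn;
			}
		}
	}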