The invocation of zone_init_internals(zone, j, nid, zone->present_pages)
initializes zone->managed_pages to zone->present_pages, which is non-zero
whenever the zone is not empty. See commit 0ac5e785dcb797 ("mm/mm_init.c:
remove meaningless calculation of zone->managed_pages in
free_area_init_core()") for details.

Signed-off-by: Jiwen Qi <jiwen7.qi@xxxxxxxxx>
---
 mm/mm_init.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/mm/mm_init.c b/mm/mm_init.c
index 24b68b425afb..48a4f661db98 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -1567,8 +1567,9 @@ static void __init free_area_init_core(struct pglist_data *pgdat)
 		unsigned long size = zone->spanned_pages;

 		/*
-		 * Initialize zone->managed_pages as 0 , it will be reset
-		 * when memblock allocator frees pages into buddy system.
+		 * Initialize zone->managed_pages to zone->present_pages as a first rough
+		 * estimate. memblock_free_all() will reset zone->managed_pages to 0, and
+		 * calculate the actual managed pages as they are freed to the buddy.
 		 */
 		zone_init_internals(zone, j, nid, zone->present_pages);

base-commit: 4bbf9020becbfd8fc2c3da790855b7042fad455b
-- 
2.25.1