On Tue, Apr 02, 2019 at 04:57:11PM +0200, Oscar Salvador wrote:
> On Tue, Apr 02, 2019 at 12:11:16PM +0800, Linxu Fang wrote:
> > commit 342332e6a925 ("mm/page_alloc.c: introduce kernelcore=mirror
> > option") and the patches in its series rewrote the calculation of
> > node spanned pages.
> >
> > commit e506b99696a2 ("mem-hotplug: fix node spanned pages when we
> > have a movable node") fixed part of this, but the current code still
> > has a problem: when we have a node that contains only ZONE_MOVABLE
> > and whose node id is not zero, the node's spanned pages are counted
> > twice.
> >
> > That is because the node has an empty normal zone, and its
> > zone_start_pfn or zone_end_pfn does not lie between
> > arch_zone_lowest_possible_pfn and arch_zone_highest_possible_pfn,
> > so we need to clamp the range to the node's boundaries, just like
> > commit 96e907d13602 ("bootmem: Reimplement __absent_pages_in_range()
> > using for_each_mem_pfn_range()") does.
>
> So, let me see if I understood this correctly:
>
> When calling zone_spanned_pages_in_node() for any node which is not
> node 0,
>
> 	*zone_start_pfn = arch_zone_lowest_possible_pfn[zone_type];
> 	*zone_end_pfn = arch_zone_highest_possible_pfn[zone_type];
>
> will actually set zone_start_pfn/zone_end_pfn to the values from
> node0's ZONE_NORMAL?

Of course, I meant when calling it with zone_type == ZONE_NORMAL.

--
Oscar Salvador
SUSE L3
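For illustration, below is a minimal userspace sketch of the paths being
discussed. It re-implements the zone_spanned_pages_in_node() and
adjust_zone_range_for_zone_movable() logic from mm/page_alloc.c of that
era (simplified: no hotplug empty-node check, no absent-pages
accounting), so the double count and the effect of the proposed clamp
can be reproduced outside the kernel. The two-node pfn layout, the
kernelcore=mirror setting, and the use_clamp switch are all made up for
the example; only the control flow mirrors the kernel code.

	/*
	 * Userspace model: node 0 is all ZONE_NORMAL, node 1 is all
	 * ZONE_MOVABLE, kernelcore=mirror. Hypothetical layout:
	 *   node 0 = pfns [0x00000, 0x40000)
	 *   node 1 = pfns [0x40000, 0x80000), fully movable
	 */
	#include <stdbool.h>
	#include <stdio.h>

	enum { ZONE_NORMAL, ZONE_MOVABLE, MAX_NR_ZONES };

	#define clamp(v, lo, hi) ((v) < (lo) ? (lo) : ((v) > (hi) ? (hi) : (v)))
	#define min(a, b) ((a) < (b) ? (a) : (b))
	#define max(a, b) ((a) > (b) ? (a) : (b))

	/* ZONE_MOVABLE has no arch range; it is carved out of movable_zone. */
	static unsigned long arch_zone_lowest_possible_pfn[MAX_NR_ZONES]  = { 0x00000, 0 };
	static unsigned long arch_zone_highest_possible_pfn[MAX_NR_ZONES] = { 0x80000, 0 };
	static unsigned long zone_movable_pfn[2] = { 0, 0x40000 };
	static bool mirrored_kernelcore = true;   /* kernelcore=mirror */
	static int movable_zone = ZONE_NORMAL;

	static void adjust_zone_range_for_zone_movable(int nid, unsigned long zone_type,
			unsigned long node_start_pfn, unsigned long node_end_pfn,
			unsigned long *zone_start_pfn, unsigned long *zone_end_pfn)
	{
		if (!zone_movable_pfn[nid])
			return;
		if (zone_type == ZONE_MOVABLE) {
			/* Size ZONE_MOVABLE from the per-node movable start. */
			*zone_start_pfn = zone_movable_pfn[nid];
			*zone_end_pfn = min(node_end_pfn,
					arch_zone_highest_possible_pfn[movable_zone]);
		} else if (!mirrored_kernelcore &&
			   *zone_start_pfn < zone_movable_pfn[nid] &&
			   *zone_end_pfn > zone_movable_pfn[nid]) {
			/* ZONE_MOVABLE starts within this range: truncate it. */
			*zone_end_pfn = zone_movable_pfn[nid];
		} else if (*zone_start_pfn >= zone_movable_pfn[nid]) {
			/* Whole range is inside ZONE_MOVABLE: empty this zone. */
			*zone_start_pfn = *zone_end_pfn;
		}
	}

	static unsigned long zone_spanned_pages_in_node(int nid, unsigned long zone_type,
			unsigned long node_start_pfn, unsigned long node_end_pfn,
			bool use_clamp)
	{
		unsigned long zone_low = arch_zone_lowest_possible_pfn[zone_type];
		unsigned long zone_high = arch_zone_highest_possible_pfn[zone_type];
		unsigned long zone_start_pfn, zone_end_pfn;

		if (use_clamp) {
			/* Proposed fix: constrain to this node's range up front. */
			zone_start_pfn = clamp(node_start_pfn, zone_low, zone_high);
			zone_end_pfn = clamp(node_end_pfn, zone_low, zone_high);
		} else {
			/* Current code: start from the global arch zone extents. */
			zone_start_pfn = zone_low;
			zone_end_pfn = zone_high;
		}
		adjust_zone_range_for_zone_movable(nid, zone_type,
				node_start_pfn, node_end_pfn,
				&zone_start_pfn, &zone_end_pfn);

		/* Does this node have pages within the zone's range at all? */
		if (zone_end_pfn < node_start_pfn || zone_start_pfn > node_end_pfn)
			return 0;

		/* Move the zone boundaries inside the node if necessary. */
		zone_end_pfn = min(zone_end_pfn, node_end_pfn);
		zone_start_pfn = max(zone_start_pfn, node_start_pfn);

		return zone_end_pfn - zone_start_pfn;
	}

	int main(void)
	{
		/* Node 1: only ZONE_MOVABLE, nid != 0. */
		unsigned long start = 0x40000, end = 0x80000;

		for (int use_clamp = 0; use_clamp <= 1; use_clamp++) {
			unsigned long normal = zone_spanned_pages_in_node(1,
					ZONE_NORMAL, start, end, use_clamp);
			unsigned long movable = zone_spanned_pages_in_node(1,
					ZONE_MOVABLE, start, end, use_clamp);
			printf("%s clamp: normal=%lx movable=%lx total=%lx (node spans %lx)\n",
			       use_clamp ? "with   " : "without",
			       normal, movable, normal + movable, end - start);
		}
		return 0;
	}

With this hypothetical layout the program reports a ZONE_NORMAL span
equal to the whole of node 1 when the clamp is absent (total = 0x80000,
twice the node's 0x40000 pages) and a zero ZONE_NORMAL span with it:
after clamping, zone_start_pfn >= zone_movable_pfn[nid] holds, so
adjust_zone_range_for_zone_movable() correctly empties the normal zone.
That is the doubled node_spanned_pages the changelog describes.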