On Thu, Apr 04, 2019 at 05:38:24PM +0800, Linxu Fang wrote:
> commit <342332e6a925> ("mm/page_alloc.c: introduce kernelcore=mirror
> option") and series patches rewrote the calculation of node spanned
> pages.
> commit <e506b99696a2> ("mem-hotplug: fix node spanned pages when we have
> a movable node") fixed part of it, but the current code still has a
> problem: when we have a node with only zone_movable and the node id is
> not zero, the size of node spanned pages is double-counted.
> That's because we have an empty normal zone, and zone_start_pfn or
> zone_end_pfn is not between arch_zone_lowest_possible_pfn and
> arch_zone_highest_possible_pfn, so we need to use clamp to constrain the
> range, just like commit <96e907d13602> ("bootmem: Reimplement
> __absent_pages_in_range() using for_each_mem_pfn_range()").
>
> e.g.
> Zone ranges:
>   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
>   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
>   Normal   [mem 0x0000000100000000-0x000000023fffffff]
> Movable zone start for each node
>   Node 0: 0x0000000100000000
>   Node 1: 0x0000000140000000
> Early memory node ranges
>   node 0: [mem 0x0000000000001000-0x000000000009efff]
>   node 0: [mem 0x0000000000100000-0x00000000bffdffff]
>   node 0: [mem 0x0000000100000000-0x000000013fffffff]
>   node 1: [mem 0x0000000140000000-0x000000023fffffff]
>
> node 0 DMA     spanned:0xfff    present:0xf9e    absent:0x61
> node 0 DMA32   spanned:0xff000  present:0xbefe0  absent:0x40020
> node 0 Normal  spanned:0        present:0        absent:0
> node 0 Movable spanned:0x40000  present:0x40000  absent:0
> On node 0 totalpages(node_present_pages): 1048446
> node_spanned_pages: 1310719
> node 1 DMA     spanned:0        present:0        absent:0
> node 1 DMA32   spanned:0        present:0        absent:0
> node 1 Normal  spanned:0x100000 present:0x100000 absent:0
> node 1 Movable spanned:0x100000 present:0x100000 absent:0
> On node 1 totalpages(node_present_pages): 2097152
> node_spanned_pages: 2097152
> Memory: 6967796K/12582392K available (16388K kernel code, 3686K rwdata,
> 4468K rodata, 2160K init, 10444K bss, 5614596K reserved, 0K
> cma-reserved)
>
> It shows that the memory of node 1 is double-counted.
> After this patch, the problem is fixed:
>
> node 0 DMA     spanned:0xfff    present:0xf9e    absent:0x61
> node 0 DMA32   spanned:0xff000  present:0xbefe0  absent:0x40020
> node 0 Normal  spanned:0        present:0        absent:0
> node 0 Movable spanned:0x40000  present:0x40000  absent:0
> On node 0 totalpages(node_present_pages): 1048446
> node_spanned_pages: 1310719
> node 1 DMA     spanned:0        present:0        absent:0
> node 1 DMA32   spanned:0        present:0        absent:0
> node 1 Normal  spanned:0        present:0        absent:0
> node 1 Movable spanned:0x100000 present:0x100000 absent:0
> On node 1 totalpages(node_present_pages): 1048576
> node_spanned_pages: 1048576
> Memory: 6967796K/8388088K available (16388K kernel code, 3686K rwdata,
> 4468K rodata, 2160K init, 10444K bss, 1420292K reserved, 0K
> cma-reserved)
>
> Signed-off-by: Linxu Fang <fanglinxu@xxxxxxxxxx>

Uhmf, I have to confess that this whole thing about kernelcore and
movablecore makes my head spin.

I agree that clamping the range to the node's start_pfn/end_pfn is the
right thing to do.
On the other hand, I cannot figure out why these two statements from
zone_spanned_pages_in_node() do not help in setting the right values:

	*zone_end_pfn = min(*zone_end_pfn, node_end_pfn);
	*zone_start_pfn = max(*zone_start_pfn, node_start_pfn);

If I take one of your examples:

Node 0: node_start_pfn=1 node_end_pfn=2822144

DMA     zone_low=1       zone_high=4096
DMA32   zone_low=4096    zone_high=1048576
Normal  zone_low=1048576 zone_high=7942144
Movable zone_low=0       zone_high=0

*zone_end_pfn should be set to 2822144, and so
zone_end_pfn - zone_start_pfn should return the right value, no?
Or is it because we have the wrong values before calling
adjust_zone_range_for_zone_movable() and the whole thing gets messed up
there?

Please note that the patch looks correct to me, I just want to
understand why those two statements do not help here.

-- 
Oscar Salvador
SUSE L3