> will actually set zone_start_pfn/zone_end_pfn to the values from node0's
> ZONE_NORMAL?
> So we use clamp to actually check if such values fall within what node1's
> memory spans, and ignore them otherwise?

That's right. Currently (without this patch), zone_start_pfn/zone_end_pfn
hold the same values on all nodes.

Let's look at another example, obtained by adding some debugging
information, e.g.:

Zone ranges:
  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
  Normal   [mem 0x0000000100000000-0x0000000792ffffff]
Movable zone start for each node
  Node 0: 0x0000000100000000
  Node 1: 0x00000002b1000000
  Node 2: 0x0000000522000000
Early memory node ranges
  node 0: [mem 0x0000000000001000-0x000000000009efff]
  node 0: [mem 0x0000000000100000-0x00000000bffdefff]
  node 0: [mem 0x0000000100000000-0x00000002b0ffffff]
  node 1: [mem 0x00000002b1000000-0x0000000521ffffff]
  node 2: [mem 0x0000000522000000-0x0000000792ffffff]

Node 0: node_start_pfn=1 node_end_pfn=2822144
  DMA     zone_low=1       zone_high=4096
  DMA32   zone_low=4096    zone_high=1048576
  Normal  zone_low=1048576 zone_high=7942144
  Movable zone_low=0       zone_high=0
Node 1: node_start_pfn=2822144 node_end_pfn=5382144
  DMA     zone_low=1       zone_high=4096
  DMA32   zone_low=4096    zone_high=1048576
  Normal  zone_low=1048576 zone_high=7942144
  Movable zone_low=0       zone_high=0
Node 2: node_start_pfn=5382144 node_end_pfn=7942144
  DMA     zone_low=1       zone_high=4096
  DMA32   zone_low=4096    zone_high=1048576
  Normal  zone_low=1048576 zone_high=7942144
  Movable zone_low=0       zone_high=0

Before this patch, zone_start_pfn/zone_end_pfn are the same on nodes 0,
1 and 2:

  DMA     zone_start_pfn:1       zone_end_pfn:4096
  DMA32   zone_start_pfn:4096    zone_end_pfn:1048576
  Normal  zone_start_pfn:1048576 zone_end_pfn:7942144
  Movable zone_start_pfn:0       zone_end_pfn:0

Spanned pages result:

node 0:
  DMA     spanned:4095
  DMA32   spanned:1044480
  Normal  spanned:0
  Movable spanned:1773568
  totalpages:2559869
node 1:
  DMA     spanned:0
  DMA32   spanned:0
  Normal  spanned:2560000
  Movable spanned:2560000
  totalpages:5120000
node 2:
  DMA     spanned:0
  DMA32   spanned:0
  Normal  spanned:2560000
  Movable spanned:2560000
  totalpages:5120000

After this patch:

node 0:
  DMA     zone_start_pfn:1       zone_end_pfn:4096    spanned:4095
  DMA32   zone_start_pfn:4096    zone_end_pfn:1048576 spanned:1044480
  Normal  zone_start_pfn:1048576 zone_end_pfn:2822144 spanned:0
  Movable zone_start_pfn:0       zone_end_pfn:0       spanned:1773568
  totalpages:2559869
node 1:
  DMA     zone_start_pfn:4096    zone_end_pfn:4096    spanned:0
  DMA32   zone_start_pfn:1048576 zone_end_pfn:1048576 spanned:0
  Normal  zone_start_pfn:2822144 zone_end_pfn:5382144 spanned:0
  Movable zone_start_pfn:0       zone_end_pfn:0       spanned:2560000
  totalpages:2560000
node 2:
  DMA     zone_start_pfn:4096    zone_end_pfn:4096    spanned:0
  DMA32   zone_start_pfn:1048576 zone_end_pfn:1048576 spanned:0
  Normal  zone_start_pfn:5382144 zone_end_pfn:7942144 spanned:0
  Movable zone_start_pfn:0       zone_end_pfn:0       spanned:2560000
  totalpages:2560000

It is easy to construct such a scenario: configure kernelcore=mirror on
a multi-NUMA machine whose memory is not fully mirrored (a machine
without any mirrored memory also works). The difference is clearly
visible in the boot messages and in /proc/pagetypeinfo. On earlier
kernel versions, this bug directly doubles the memory of some nodes;
although the redundant memory shows up as reserved memory, that is not
the expected behavior.
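
To make the clamp discussion above concrete, here is a minimal
userspace sketch, not the kernel implementation: it assumes the
per-node boundaries are derived by clamping node_start_pfn and
node_end_pfn into each zone's global [zone_low, zone_high] limits
(clamp_pfn and the hard-coded pfn tables are illustrative, taken from
the debug output above). The ZONE_MOVABLE carve-out performed for
kernelcore=mirror (adjust_zone_range_for_zone_movable() in the kernel)
is deliberately left out, so the Normal spans printed here are the raw
spans that the mirror logic then moves into Movable:

#include <stdio.h>

/* Same semantics as the kernel's clamp(val, lo, hi). */
static unsigned long clamp_pfn(unsigned long val, unsigned long lo,
			       unsigned long hi)
{
	if (val < lo)
		return lo;
	if (val > hi)
		return hi;
	return val;
}

int main(void)
{
	/* Global zone limits (pfns) from the "Zone ranges" output above. */
	const char *zone_name[]   = { "DMA", "DMA32", "Normal" };
	unsigned long zone_low[]  = { 1, 4096, 1048576 };
	unsigned long zone_high[] = { 4096, 1048576, 7942144 };

	/* Per-node spans (pfns) from the debug output above. */
	unsigned long node_start[] = { 1, 2822144, 5382144 };
	unsigned long node_end[]   = { 2822144, 5382144, 7942144 };

	for (int nid = 0; nid < 3; nid++) {
		printf("node %d:\n", nid);
		for (int z = 0; z < 3; z++) {
			/* Clamp the node span into the zone's limits. */
			unsigned long start = clamp_pfn(node_start[nid],
							zone_low[z],
							zone_high[z]);
			unsigned long end = clamp_pfn(node_end[nid],
						      zone_low[z],
						      zone_high[z]);

			printf("  %-6s zone_start_pfn:%lu zone_end_pfn:%lu spanned:%lu\n",
			       zone_name[z], start, end, end - start);
		}
	}
	return 0;
}

Running this reproduces the per-node zone_start_pfn/zone_end_pfn values
shown in the "After this patch" output, e.g. for node 1's DMA zone,
clamp_pfn(2822144, 1, 4096) = 4096 and clamp_pfn(5382144, 1, 4096) =
4096, giving an empty span instead of inheriting node 0's boundaries.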