On 04.02.20 15:25, Baoquan He wrote:
> On 10/06/19 at 10:56am, David Hildenbrand wrote:
>> If we have holes, the holes will automatically get detected and removed
>> once we remove the next bigger/smaller section. The extra checks can
>> go.
>>
>> Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
>> Cc: Oscar Salvador <osalvador@xxxxxxx>
>> Cc: Michal Hocko <mhocko@xxxxxxxx>
>> Cc: David Hildenbrand <david@xxxxxxxxxx>
>> Cc: Pavel Tatashin <pasha.tatashin@xxxxxxxxxx>
>> Cc: Dan Williams <dan.j.williams@xxxxxxxxx>
>> Cc: Wei Yang <richardw.yang@xxxxxxxxxxxxxxx>
>> Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>
>> ---
>>  mm/memory_hotplug.c | 34 +++++++---------------------------
>>  1 file changed, 7 insertions(+), 27 deletions(-)
>>
>> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
>> index f294918f7211..8dafa1ba8d9f 100644
>> --- a/mm/memory_hotplug.c
>> +++ b/mm/memory_hotplug.c
>> @@ -393,6 +393,9 @@ static void shrink_zone_span(struct zone *zone, unsigned long start_pfn,
>>  		if (pfn) {
>>  			zone->zone_start_pfn = pfn;
>>  			zone->spanned_pages = zone_end_pfn - pfn;
>> +		} else {
>> +			zone->zone_start_pfn = 0;
>> +			zone->spanned_pages = 0;
>>  		}
>>  	} else if (zone_end_pfn == end_pfn) {
>>  		/*
>> @@ -405,34 +408,11 @@ static void shrink_zone_span(struct zone *zone, unsigned long start_pfn,
>>  					       start_pfn);
>>  		if (pfn)
>>  			zone->spanned_pages = pfn - zone_start_pfn + 1;
>> +		else {
>> +			zone->zone_start_pfn = 0;
>> +			zone->spanned_pages = 0;
>
> Wondering in which case (zone_start_pfn != start_pfn) holds and we
> still get here.

That could only happen if zone_start_pfn had already been "out of the
zone". If you ask me: unlikely :)

This change at least maintains the same result as before (where the
all-holes check would have caught it).

-- 
Thanks,

David / dhildenb
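
[Editor's sketch, for reference: a minimal user-space model of the
shrink_zone_span() control flow being discussed. struct zone_model and
the find_*_pfn() stubs are simplified stand-ins for the kernel's
struct zone and section-scanning helpers, not the real implementations;
the sketch only shows when the zone gets marked empty.]

/*
 * Minimal stand-alone sketch (not kernel code): zone_model and the
 * find_*_pfn() stubs are simplified placeholders for struct zone and
 * the section-scanning helpers in mm/memory_hotplug.c.
 */
#include <stdio.h>

struct zone_model {
	unsigned long zone_start_pfn;
	unsigned long spanned_pages;
};

/* Stand-in: return the first present pfn in [start, end), 0 if none. */
static unsigned long find_smallest_pfn(unsigned long start, unsigned long end)
{
	return 0; /* pretend the range is all holes */
}

/* Stand-in: return the last present pfn in [start, end), 0 if none. */
static unsigned long find_biggest_pfn(unsigned long start, unsigned long end)
{
	return 0; /* pretend the range is all holes */
}

static void shrink_span(struct zone_model *zone, unsigned long start_pfn,
			unsigned long end_pfn)
{
	unsigned long zone_start_pfn = zone->zone_start_pfn;
	unsigned long zone_end_pfn = zone_start_pfn + zone->spanned_pages;
	unsigned long pfn;

	if (zone_start_pfn == start_pfn) {
		/* Removed range hits the low end: look for a new start. */
		pfn = find_smallest_pfn(end_pfn, zone_end_pfn);
		if (pfn) {
			zone->zone_start_pfn = pfn;
			zone->spanned_pages = zone_end_pfn - pfn;
		} else {
			/* Nothing left, the zone is empty now. */
			zone->zone_start_pfn = 0;
			zone->spanned_pages = 0;
		}
	} else if (zone_end_pfn == end_pfn) {
		/* Removed range hits the high end: look for a new end. */
		pfn = find_biggest_pfn(zone_start_pfn, start_pfn);
		if (pfn) {
			zone->spanned_pages = pfn - zone_start_pfn + 1;
		} else {
			/*
			 * Only reachable if zone_start_pfn itself was
			 * already "out of the zone" -- the unlikely case
			 * discussed above.
			 */
			zone->zone_start_pfn = 0;
			zone->spanned_pages = 0;
		}
	}
}

int main(void)
{
	struct zone_model z = { .zone_start_pfn = 0x1000,
				.spanned_pages = 0x1000 };

	/* Remove [0x1000, 0x2000): everything the zone spanned. */
	shrink_span(&z, 0x1000, 0x2000);
	printf("start=%#lx spanned=%#lx\n", z.zone_start_pfn, z.spanned_pages);
	return 0;
}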