On Sun, Oct 06, 2019 at 10:56:43AM +0200, David Hildenbrand wrote:
> With shrink_pgdat_span() out of the way, we now always have a valid
> zone.
>
> Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> Cc: Oscar Salvador <osalvador@xxxxxxx>
> Cc: David Hildenbrand <david@xxxxxxxxxx>
> Cc: Michal Hocko <mhocko@xxxxxxxx>
> Cc: Pavel Tatashin <pasha.tatashin@xxxxxxxxxx>
> Cc: Dan Williams <dan.j.williams@xxxxxxxxx>
> Cc: Wei Yang <richardw.yang@xxxxxxxxxxxxxxx>
> Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>

Reviewed-by: Oscar Salvador <osalvador@xxxxxxx>

> ---
>  mm/memory_hotplug.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index bf5173e7913d..f294918f7211 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -337,7 +337,7 @@ static unsigned long find_smallest_section_pfn(int nid, struct zone *zone,
>  		if (unlikely(pfn_to_nid(start_pfn) != nid))
>  			continue;
>
> -		if (zone && zone != page_zone(pfn_to_page(start_pfn)))
> +		if (zone != page_zone(pfn_to_page(start_pfn)))
>  			continue;
>
>  		return start_pfn;
> @@ -362,7 +362,7 @@ static unsigned long find_biggest_section_pfn(int nid, struct zone *zone,
>  		if (unlikely(pfn_to_nid(pfn) != nid))
>  			continue;
>
> -		if (zone && zone != page_zone(pfn_to_page(pfn)))
> +		if (zone != page_zone(pfn_to_page(pfn)))
>  			continue;
>
>  		return pfn;
> --
> 2.21.0
>

--
Oscar Salvador
SUSE L3