On 5/31/21 3:09 PM, Oscar Salvador wrote:
> Currently, memory-hotplug code takes zone's span_writelock
> and pgdat's resize_lock when resizing the node/zone's spanned
> pages via {move_pfn_range_to_zone(),remove_pfn_range_from_zone()}
> and when resizing node and zone's present pages via
> adjust_present_page_count().
>
> These locks are also taken during the initialization of the system
> at boot time, where it protects parallel struct page initialization,
> but they should not really be needed in memory-hotplug where all
> operations are a) synchronized on device level and b) serialized by
> the mem_hotplug_lock lock.
>
> Signed-off-by: Oscar Salvador <osalvador@xxxxxxx>
> Acked-by: David Hildenbrand <david@xxxxxxxxxx>
> ---
>  mm/memory_hotplug.c | 11 -----------
>  1 file changed, 11 deletions(-)
>
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 075b34803fec..9edbc57055bf 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -329,7 +329,6 @@ static void shrink_zone_span(struct zone *zone, unsigned long start_pfn,
>  	unsigned long pfn;
>  	int nid = zone_to_nid(zone);
>
> -	zone_span_writelock(zone);
>  	if (zone->zone_start_pfn == start_pfn) {
>  		/*
>  		 * If the section is smallest section in the zone, it need
> @@ -362,7 +361,6 @@ static void shrink_zone_span(struct zone *zone, unsigned long start_pfn,
>  			zone->spanned_pages = 0;
>  		}
>  	}
> -	zone_span_writeunlock(zone);
>  }
>
>  static void update_pgdat_span(struct pglist_data *pgdat)
> @@ -424,10 +422,8 @@ void __ref remove_pfn_range_from_zone(struct zone *zone,
>
>  	clear_zone_contiguous(zone);
>
> -	pgdat_resize_lock(zone->zone_pgdat, &flags);
>  	shrink_zone_span(zone, start_pfn, start_pfn + nr_pages);
>  	update_pgdat_span(pgdat);
> -	pgdat_resize_unlock(zone->zone_pgdat, &flags);
>
>  	set_zone_contiguous(zone);
>  }
> @@ -638,15 +634,10 @@ void __ref move_pfn_range_to_zone(struct zone *zone, unsigned long start_pfn,
>
>  	clear_zone_contiguous(zone);
>
> -	/* TODO Huh pgdat is irqsave while zone is not. It used to be like that before */
> -	pgdat_resize_lock(pgdat, &flags);
> -	zone_span_writelock(zone);
>  	if (zone_is_empty(zone))
>  		init_currently_empty_zone(zone, start_pfn, nr_pages);
>  	resize_zone_range(zone, start_pfn, nr_pages);
> -	zone_span_writeunlock(zone);
>  	resize_pgdat_range(pgdat, start_pfn, nr_pages);
> -	pgdat_resize_unlock(pgdat, &flags);
>
>  	/*
>  	 * Subsection population requires care in pfn_to_online_page().
> @@ -739,9 +730,7 @@ void adjust_present_page_count(struct zone *zone, long nr_pages)
>  	unsigned long flags;
>
>  	zone->present_pages += nr_pages;
> -	pgdat_resize_lock(zone->zone_pgdat, &flags);
>  	zone->zone_pgdat->node_present_pages += nr_pages;
> -	pgdat_resize_unlock(zone->zone_pgdat, &flags);
>  }
>
>  int mhp_init_memmap_on_memory(unsigned long pfn, unsigned long nr_pages,

Should we also just drop the zone_span_write[lock|unlock]() helpers, since there are no users left?