Currently, page_outside_zone_boundaries() takes the zone's span_seqlock when
reading zone_start_pfn and spanned_pages, so those values are stable with
respect to memory hotplug operations.

move_pfn_range_to_zone() and remove_pfn_range_from_zone(), the only functions
that can change those zone fields, are serialized by mem_hotplug_lock via
mem_hotplug_{begin,done}(), so the readers can simply use
{get,put}_online_mems() instead.

This will allow us to completely remove span_seqlock, as no users will remain
after this series.

Signed-off-by: Oscar Salvador <osalvador@xxxxxxx>
---
 mm/page_alloc.c | 14 ++++++--------
 1 file changed, 6 insertions(+), 8 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index aaa1655cf682..296cb00802b4 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -582,17 +582,15 @@ void set_pageblock_migratetype(struct page *page, int migratetype)
 static int page_outside_zone_boundaries(struct zone *zone, struct page *page)
 {
 	int ret = 0;
-	unsigned seq;
 	unsigned long pfn = page_to_pfn(page);
 	unsigned long sp, start_pfn;
 
-	do {
-		seq = zone_span_seqbegin(zone);
-		start_pfn = zone->zone_start_pfn;
-		sp = zone->spanned_pages;
-		if (!zone_spans_pfn(zone, pfn))
-			ret = 1;
-	} while (zone_span_seqretry(zone, seq));
+	get_online_mems();
+	start_pfn = zone->zone_start_pfn;
+	sp = zone->spanned_pages;
+	if (!zone_spans_pfn(zone, pfn))
+		ret = 1;
+	put_online_mems();
 
 	if (ret)
 		pr_err("page 0x%lx outside node %d zone %s [ 0x%lx - 0x%lx ]\n",
-- 
2.16.3