On 4/14/21 3:39 PM, Mel Gorman wrote:
> Historically when freeing pages, free_one_page() assumed that callers
> had IRQs disabled and the zone->lock could be acquired with spin_lock().
> This confuses the scope of what local_lock_irq is protecting and what
> zone->lock is protecting in free_unref_page_list in particular.
>
> This patch uses spin_lock_irqsave() for the zone->lock in
> free_one_page() instead of relying on callers to have disabled
> IRQs. free_unref_page_commit() is changed to only deal with PCP pages
> protected by the local lock. free_unref_page_list() then first frees
> isolated pages to the buddy lists with free_one_page() and frees the rest
> of the pages to the PCP via free_unref_page_commit(). The end result
> is that free_one_page() no longer depends on side-effects of
> local_lock to be correct.
>
> Note that this may incur a performance penalty while memory hot-remove
> is running but that is not a common operation.
>
> Signed-off-by: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>

Acked-by: Vlastimil Babka <vbabka@xxxxxxx>

A nit below:

> @@ -3294,6 +3295,7 @@ void free_unref_page_list(struct list_head *list)
>          struct page *page, *next;
>          unsigned long flags, pfn;
>          int batch_count = 0;
> +        int migratetype;
>
>          /* Prepare pages for freeing */
>          list_for_each_entry_safe(page, next, list, lru) {
> @@ -3301,15 +3303,28 @@ void free_unref_page_list(struct list_head *list)
>                  if (!free_unref_page_prepare(page, pfn))
>                          list_del(&page->lru);
>                  set_page_private(page, pfn);

Should probably move this below so we don't set the private field for
pages that then go through free_one_page()? Doesn't seem to be a bug,
just unnecessary (a quick sketch of what I mean is at the end of this
mail).

> +
> +                /*
> +                 * Free isolated pages directly to the allocator, see
> +                 * comment in free_unref_page.
> +                 */
> +                migratetype = get_pcppage_migratetype(page);
> +                if (unlikely(migratetype >= MIGRATE_PCPTYPES)) {
> +                        if (unlikely(is_migrate_isolate(migratetype))) {
> +                                free_one_page(page_zone(page), page, pfn, 0,
> +                                              migratetype, FPI_NONE);
> +                                list_del(&page->lru);
> +                        }
> +                }
>          }
>
>          local_lock_irqsave(&pagesets.lock, flags);
>          list_for_each_entry_safe(page, next, list, lru) {
> -                unsigned long pfn = page_private(page);
> -
> +                pfn = page_private(page);
>                  set_page_private(page, 0);
> +                migratetype = get_pcppage_migratetype(page);
>                  trace_mm_page_free_batched(page);
> -                free_unref_page_commit(page, pfn);
> +                free_unref_page_commit(page, pfn, migratetype);
>
>                  /*
>                   * Guard against excessive IRQ disabled times when we get
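
For the nit above, a rough and untested sketch of what I had in mind.
It keeps the helpers from the patch as-is and assumes the pfn is still
computed at the top of the loop (that context line wasn't quoted); the
only point is the ordering, so set_page_private() is reached only for
pages that will actually go through the PCP pass:

        list_for_each_entry_safe(page, next, list, lru) {
                pfn = page_to_pfn(page);
                if (!free_unref_page_prepare(page, pfn)) {
                        /* Page is off the list, no need to store the pfn */
                        list_del(&page->lru);
                        continue;
                }

                /*
                 * Free isolated pages directly to the allocator, see
                 * comment in free_unref_page.
                 */
                migratetype = get_pcppage_migratetype(page);
                if (unlikely(migratetype >= MIGRATE_PCPTYPES)) {
                        if (unlikely(is_migrate_isolate(migratetype))) {
                                free_one_page(page_zone(page), page, pfn, 0,
                                              migratetype, FPI_NONE);
                                list_del(&page->lru);
                                continue;
                        }
                }

                /* Only pages headed for the PCP keep the pfn in private */
                set_page_private(page, pfn);
        }

The second loop over the list is unchanged; the only behavioral
difference is that pages dropped by free_unref_page_prepare() or freed
via free_one_page() no longer get a stale pfn stored in page->private.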