My recent change to put_pages_list() dereferences folio->lru.next after
returning the folio to the page allocator.  Usually this is now on the pcp
list with other free folios, so we try to free an already-free folio.  This
only happens with lists that have more than 15 entries, so it wasn't
immediately discovered.  Revert to using list_for_each_entry_safe() so we
dereference lru.next before disposing of the folio.

Reported-by: "Borah, Chaitanya Kumar" <chaitanya.kumar.borah@xxxxxxxxx>
Fixes: 24835f899c01 ("mm: use free_unref_folios() in put_pages_list()")
Signed-off-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
---
 mm/swap.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/swap.c b/mm/swap.c
index a910af21ba68..1d4b7713605d 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -139,10 +139,10 @@ EXPORT_SYMBOL(__folio_put);
 void put_pages_list(struct list_head *pages)
 {
 	struct folio_batch fbatch;
-	struct folio *folio;
+	struct folio *folio, *next;
 
 	folio_batch_init(&fbatch);
-	list_for_each_entry(folio, pages, lru) {
+	list_for_each_entry_safe(folio, next, pages, lru) {
 		if (!folio_put_testzero(folio))
 			continue;
 		if (folio_test_hugetlb(folio)) {
-- 
2.43.0
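
Not part of the patch: for readers unfamiliar with this bug class, below is a
minimal userspace sketch of why the successor pointer must be cached before the
current entry is freed.  The struct node type and build_list() helper are
illustrative stand-ins only, not kernel code; list_for_each_entry_safe() does
the equivalent caching for struct list_head entries.

/*
 * Minimal userspace sketch (not kernel code) of the unsafe vs. safe list
 * walk when the loop body frees the current entry.
 */
#include <stdio.h>
#include <stdlib.h>

struct node {
	int value;
	struct node *next;
};

/* Build a short singly linked list holding values 0..n-1. */
static struct node *build_list(int n)
{
	struct node *head = NULL;

	while (n--) {
		struct node *new = malloc(sizeof(*new));

		new->value = n;
		new->next = head;
		head = new;
	}
	return head;
}

int main(void)
{
	struct node *cur, *next;

	/*
	 * Unsafe pattern (what the reverted code effectively did):
	 *
	 *	for (cur = head; cur; cur = cur->next)
	 *		free(cur);	// cur->next is read after cur is freed
	 *
	 * Safe pattern: remember the successor before disposing of the
	 * current entry, which is what list_for_each_entry_safe() does.
	 */
	for (cur = build_list(5); cur; cur = next) {
		next = cur->next;	/* cache the link first */
		printf("freeing node %d\n", cur->value);
		free(cur);		/* now safe to dispose of cur */
	}
	return 0;
}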