free_unref_folios() can now handle non-hugetlb large folios, so keep
normal large folios in the batch.  hugetlb folios still need to be
handled specially.

I believe that folios freed using put_pages_list() cannot be accounted
to a memcg (or the small folios would trip the "page still charged to
cgroup" warning), but put an assertion in to check that.

Signed-off-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
---
 mm/swap.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/mm/swap.c b/mm/swap.c
index f72364e92d5f..4643e0d53124 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -158,10 +158,11 @@ void put_pages_list(struct list_head *pages)
 	list_for_each_entry_safe(folio, next, pages, lru) {
 		if (!folio_put_testzero(folio))
 			continue;
-		if (folio_test_large(folio)) {
-			__folio_put_large(folio);
+		if (folio_test_hugetlb(folio)) {
+			free_huge_folio(folio);
 			continue;
 		}
+		VM_BUG_ON_FOLIO(folio_memcg(folio), folio);
 		/* LRU flag must be clear because it's passed using the lru */
 		if (folio_batch_add(&fbatch, folio) > 0)
 			continue;
-- 
2.43.0