Instead of calling put_page() one page at a time, pop pages off the list
if there are other refcounts and pass the remainder to
free_unref_page_list().  This should be a speed improvement, but I have
no measurements to support that.  It's also not very widely used today,
so I can't say I've really tested it.  I'm only bothering with this
patch because I'd like the IOMMU code to use it:
https://lore.kernel.org/lkml/20210930162043.3111119-1-willy@xxxxxxxxxxxxx/

Signed-off-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
---
 mm/swap.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/mm/swap.c b/mm/swap.c
index af3cad4e5378..f6b38398fa6f 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -139,13 +139,14 @@ EXPORT_SYMBOL(__put_page);
  */
 void put_pages_list(struct list_head *pages)
 {
-	while (!list_empty(pages)) {
-		struct page *victim;
+	struct page *page, *next;
 
-		victim = lru_to_page(pages);
-		list_del(&victim->lru);
-		put_page(victim);
+	list_for_each_entry_safe(page, next, pages, lru) {
+		if (!put_page_testzero(page))
+			list_del(&page->lru);
 	}
+
+	free_unref_page_list(pages);
 }
 EXPORT_SYMBOL(put_pages_list);
-- 
2.32.0