On Thu, Aug 10, 2023 at 11:33:32AM +0100, Ryan Roberts wrote:
> +void folios_put_refs(struct folio_range *folios, int nr)
> +{
> +	int i;
> +	LIST_HEAD(pages_to_free);
> +	struct lruvec *lruvec = NULL;
> +	unsigned long flags = 0;
> +	unsigned int lock_batch;
> +
> +	for (i = 0; i < nr; i++) {
> +		struct folio *folio = page_folio(folios[i].start);
> +		int refs = folios[i].end - folios[i].start;
> +
> +		/*
> +		 * Make sure the IRQ-safe lock-holding time does not get
> +		 * excessive with a continuous string of pages from the
> +		 * same lruvec. The lock is held only if lruvec != NULL.
> +		 */
> +		if (lruvec && ++lock_batch == SWAP_CLUSTER_MAX) {
> +			unlock_page_lruvec_irqrestore(lruvec, flags);
> +			lruvec = NULL;
> +		}
> +
> +		if (is_huge_zero_page(&folio->page))
> +			continue;
> +
> +		if (folio_is_zone_device(folio)) {
> +			if (lruvec) {
> +				unlock_page_lruvec_irqrestore(lruvec, flags);
> +				lruvec = NULL;
> +			}
> +			if (put_devmap_managed_page(&folio->page))
> +				continue;
> +			if (folio_put_testzero(folio))

We're only putting one ref for the zone_device folios?  Surely this
should be ref_sub_and_test like below?

> +				free_zone_device_page(&folio->page);
> +			continue;
> +		}
> +
> +		if (!folio_ref_sub_and_test(folio, refs))
> +			continue;
> +
> +		if (folio_test_large(folio)) {
> +			if (lruvec) {
> +				unlock_page_lruvec_irqrestore(lruvec, flags);
> +				lruvec = NULL;
> +			}
> +			__folio_put_large(folio);
> +			continue;
> +		}
> +
> +		if (folio_test_lru(folio)) {
> +			struct lruvec *prev_lruvec = lruvec;
> +
> +			lruvec = folio_lruvec_relock_irqsave(folio, lruvec,
> +					&flags);
> +			if (prev_lruvec != lruvec)
> +				lock_batch = 0;
> +
> +			lruvec_del_folio(lruvec, folio);
> +			__folio_clear_lru_flags(folio);
> +		}
> +
> +		/*
> +		 * In rare cases, when truncation or holepunching raced with
> +		 * munlock after VM_LOCKED was cleared, Mlocked may still be
> +		 * found set here.  This does not indicate a problem, unless
> +		 * "unevictable_pgs_cleared" appears worryingly large.
> +		 */
> +		if (unlikely(folio_test_mlocked(folio))) {
> +			__folio_clear_mlocked(folio);
> +			zone_stat_sub_folio(folio, NR_MLOCK);
> +			count_vm_event(UNEVICTABLE_PGCLEARED);
> +		}

You'll be glad to know I've factored out a nice little helper for that.