On Fri, 2025-03-14 at 05:05 +0800, Kemeng Shi wrote:
> Replace cluster_swap_free_nr() with swap_entries_put_[map/cache]() to
> remove repeat code and leverage batch-remove for entries with last flag.
>
> Signed-off-by: Kemeng Shi <shikemeng@xxxxxxxxxxxxxxx>
> ---
>  mm/swapfile.c | 21 ++-------------------
>  1 file changed, 2 insertions(+), 19 deletions(-)
>
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index 343b34eb2a81..c27cf09d84a6 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -1570,21 +1570,6 @@ static void swap_entries_free(struct swap_info_struct *si,
>  	__swap_entries_free(si, ci, entry, nr_pages);
>  }
>
> -static void cluster_swap_free_nr(struct swap_info_struct *si,
> -		unsigned long offset, int nr_pages,
> -		unsigned char usage)
> -{
> -	struct swap_cluster_info *ci;
> -	unsigned long end = offset + nr_pages;
> -
> -	ci = lock_cluster(si, offset);
> -	do {
> -		swap_entry_put_locked(si, ci, swp_entry(si->type, offset),
> -				      usage);
> -	} while (++offset < end);
> -	unlock_cluster(ci);
> -}
> -
>  /*
>   * Caller has made sure that the swap device corresponding to entry
>   * is still around or has not been recycled.
> @@ -1601,7 +1586,7 @@ void swap_free_nr(swp_entry_t entry, int nr_pages)
>
>  	while (nr_pages) {
>  		nr = min_t(int, nr_pages, SWAPFILE_CLUSTER - offset % SWAPFILE_CLUSTER);
> -		cluster_swap_free_nr(sis, offset, nr, 1);
> +		swap_entries_put_map(sis, swp_entry(sis->type, offset), nr);
>  		offset += nr;
>  		nr_pages -= nr;
>  	}
> @@ -3632,9 +3617,7 @@ int swapcache_prepare(swp_entry_t entry, int nr)
>
>  void swapcache_clear(struct swap_info_struct *si, swp_entry_t entry, int nr)
>  {
> -	unsigned long offset = swp_offset(entry);
> -
> -	cluster_swap_free_nr(si, offset, nr, SWAP_HAS_CACHE);
> +	swap_entries_put_cache(si, entry, nr);

swap_entries_put_cache() assumes that nr does not cross a cluster boundary,
since we only lock the cluster associated with the beginning entry.

Current callers of swapcache_clear(), like do_swap_page() and
shmem_swapin_folio(), call it only for pages within a folio, so the entries
stay within a single cluster and we are okay.

Perhaps we should document with a comment that swapcache_clear() should only
be used for pages within a folio, so that the entries don't cross clusters,
for the benefit of future users of swapcache_clear() (a rough sketch of such
a comment is appended at the end of this mail).

Otherwise the patch looks good.

Tim

>  }
>
>  struct swap_info_struct *swp_swap_info(swp_entry_t entry)
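
For reference, a rough sketch of what such a comment (plus an optional sanity
check) could look like; the exact wording and the VM_WARN_ON_ONCE() check are
illustrative assumptions, not part of the posted patch:

	/*
	 * Callers must only pass swap entries that belong to a single folio,
	 * so that [offset, offset + nr) never crosses a swap cluster boundary:
	 * swap_entries_put_cache() only locks the cluster of the first entry.
	 */
	void swapcache_clear(struct swap_info_struct *si, swp_entry_t entry, int nr)
	{
		unsigned long offset = swp_offset(entry);

		/* Hypothetical sanity check: all entries must share one cluster. */
		VM_WARN_ON_ONCE(offset / SWAPFILE_CLUSTER !=
				(offset + nr - 1) / SWAPFILE_CLUSTER);

		swap_entries_put_cache(si, entry, nr);
	}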