On Fri, 2025-03-14 at 05:05 +0800, Kemeng Shi wrote:
> 1. Factor out a general swap_entries_put_map() helper to drop entries
> belonging to one cluster. If the entries are the last map, free them in
> batch; otherwise put the entries with the cluster lock acquired and
> released only once.
> 2. Iterate and call swap_entries_put_map() for each cluster in
> swap_entries_put_nr() to leverage batch-remove for the last map belonging
> to one cluster and to reduce lock acquire/release in the fallback case.
> 3. As swap_entries_put_nr() won't handle SWAP_HAS_CACHE drop, rename it
> to swap_entries_put_map_nr().
>
> Signed-off-by: Kemeng Shi <shikemeng@xxxxxxxxxxxxxxx>
> ---
>  mm/swapfile.c | 58 +++++++++++++++++++++++++--------------------------
>  1 file changed, 29 insertions(+), 29 deletions(-)
>
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index 2d0f5d630211..ebac9ff74ba7 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -1473,25 +1473,10 @@ struct swap_info_struct *get_swap_device(swp_entry_t entry)
>  	return NULL;
>  }
>
> -static unsigned char swap_entry_put(struct swap_info_struct *si,
> -				    swp_entry_t entry)

The comment in free_swap_and_cache_nr() also needs updating now that
swap_entry_put() goes away:

	 * the swap once per folio in the common case. If we do
	 * swap_entry_put() and __try_to_reclaim_swap() in the same loop, the
	 * latter will get a reference and lock the folio for every individual
	...

> +static bool swap_entries_put_map(struct swap_info_struct *si,
> +				 swp_entry_t entry, int nr)
>  {
> -	struct swap_cluster_info *ci;
>  	unsigned long offset = swp_offset(entry);
> -	unsigned char usage;
> -
> -	ci = lock_cluster(si, offset);
> -	usage = swap_entry_put_locked(si, ci, entry, 1);
> -	unlock_cluster(ci);
> -
> -	return usage;
> -}
> -