On Wed, Sep 4, 2024 at 2:22 AM Yosry Ahmed <yosryahmed@xxxxxxxxxx> wrote:
>
> > Hi Yosry,
> >
> > > > diff --git a/mm/swapfile.c b/mm/swapfile.c
> > > > index c1638a009113..8ff58be40544 100644
> > > > --- a/mm/swapfile.c
> > > > +++ b/mm/swapfile.c
> > > > @@ -1514,6 +1514,8 @@ static bool __swap_entries_free(struct swap_info_struct *si,
> > > >  	unlock_cluster_or_swap_info(si, ci);
> > > >
> > > >  	if (!has_cache) {
> > > > +		for (i = 0; i < nr; i++)
> > > > +			zswap_invalidate(swp_entry(si->type, offset + i));
> > > >  		spin_lock(&si->lock);
> > > >  		swap_entry_range_free(si, entry, nr);
> > > >  		spin_unlock(&si->lock);
> >
> > This fix from Barry has been applied to mm-unstable and it's looking
> > good so far.
>
> Kairui, Barry, any thoughts on this? Any preferences on how to make
> sure zswap_invalidate() is being called in all possible swap freeing
> paths?

I have a set of patches that removes the si->lock around
swap_entry_range_free(), and removes the slot return cache too. With
locking and caching no longer an issue, we can move zswap_invalidate()
into swap_entry_range_free() itself. I will post it ASAP.
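
For illustration, a rough sketch of the shape that could take
(hypothetical, not the actual series: the existing slot accounting
inside swap_entry_range_free() is elided, and I'm assuming
zswap_invalidate() keeps the swp_entry_t signature used in Barry's
fix above):

/* mm/swapfile.c -- sketch only */
static void swap_entry_range_free(struct swap_info_struct *si,
				  swp_entry_t entry, unsigned int nr_pages)
{
	unsigned long offset = swp_offset(entry);
	unsigned int i;

	/*
	 * Invalidate any zswap copy of each slot being freed here, so
	 * every freeing path gets it automatically instead of each
	 * caller (like __swap_entries_free() above) having to remember
	 * to do it.
	 */
	for (i = 0; i < nr_pages; i++)
		zswap_invalidate(swp_entry(swp_type(entry), offset + i));

	/* ... existing slot and cluster bookkeeping ... */
}

With something like that in place, the per-caller invalidation loops
such as the one in Barry's fix could be dropped.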