Yosry Ahmed <yosryahmed@xxxxxxxxxx> writes:

> +Ying
>
> On Mon, Nov 13, 2023 at 5:06 AM Zhongkun He
> <hezhongkun.hzk@xxxxxxxxxxxxx> wrote:
>>
>> I recently found two scenarios where a zswap entry cannot be
>> released, which causes shrink_worker and active recycling to fail.
>>
>> 1) The swap entry has been freed but is still cached in
>>    swap_slots_cache, with no swap cache and swapcount == 0.
>> 2) The option zswap_exclusive_loads_enabled is disabled and
>>    zswap_load has completed (page in the swap cache and
>>    swapcount == 0).
>
> For case (1), I think a cleaner solution would be to move the
> zswap_invalidate() call from swap_range_free() (which is called after
> the cached slots are freed) to __swap_entry_free_locked() if the usage
> goes to 0. I actually think conceptually this makes sense not just
> for zswap_invalidate(), but also for the arch call, memcg uncharging,
> etc. Slot caching is a swapfile optimization that should be internal
> to swapfile code. Once a swap entry is freed (i.e. the swap count is
> 0 AND it is not in the swap cache), all the hooks (memcg, zswap,
> arch, ...) should be called, as the swap entry is effectively freed.
> The fact that swapfile code internally batches and caches slots
> should be transparent to other parts of MM. I am not sure whether the
> calls can simply be moved or whether there are underlying assumptions
> in the implementation that would be broken, but it feels like the
> right thing to do.

This sounds reasonable to me. Just don't forget to change the other
swap_range_free() callers too.

--
Best Regards,
Huang, Ying
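
As a purely illustrative aside, below is a minimal, self-contained
userspace model of the direction discussed above; it is a sketch under
stated assumptions, not kernel code. The names (swap_entry, slot_cache,
entry_free(), free_hooks(), drain_slot_cache()) are hypothetical
stand-ins for the swap slot cache, __swap_entry_free_locked() and the
zswap/memcg/arch hooks, and the real mm/swapfile.c interfaces differ.
The only point shown is that the "entry is really free" hooks fire when
the last reference is dropped, while slot batching stays internal to the
freeing side.

/*
 * Illustrative userspace model only -- not mm/swapfile.c code.  All
 * names here are hypothetical stand-ins for the real swap slot cache,
 * __swap_entry_free_locked() and the zswap/memcg/arch hooks.
 */
#include <stdio.h>

#define SLOT_CACHE_SIZE 4

struct swap_entry {
	int slot;
	int count;		/* stand-in for the swap map count */
};

/* Stand-in for zswap_invalidate(), mem_cgroup_uncharge_swap(), etc. */
static void free_hooks(int slot)
{
	printf("free hooks run for slot %d\n", slot);
}

/* Freed slots are parked here instead of being returned to the
 * allocator right away, mimicking swap_slots_cache batching. */
static int slot_cache[SLOT_CACHE_SIZE];
static int cached;

/* Modeled drain path (the swap_range_free() analogue): after the
 * proposed move it only recycles slots; it no longer runs the hooks. */
static void drain_slot_cache(void)
{
	printf("recycling %d cached slots\n", cached);
	cached = 0;
}

/* Modeled __swap_entry_free_locked() analogue: when the last reference
 * is dropped the hooks fire immediately, and the slot caching that
 * happens afterwards stays internal to the "swapfile" side. */
static void entry_free(struct swap_entry *e)
{
	if (--e->count)
		return;
	free_hooks(e->slot);			/* entry is effectively freed */
	if (cached == SLOT_CACHE_SIZE)
		drain_slot_cache();
	slot_cache[cached++] = e->slot;		/* batching stays transparent */
}

int main(void)
{
	struct swap_entry e = { .slot = 42, .count = 1 };

	entry_free(&e);		/* hooks run here, not when the cache drains */
	drain_slot_cache();
	return 0;
}

In this model, a path that drains the cache (the swap_range_free()
analogue) never has to know about zswap at all, which is the
transparency argument made in the quoted message.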