On 2024/2/9 05:09, Andrew Morton wrote:
> On Thu, 8 Feb 2024 02:32:54 +0000 chengming.zhou@xxxxxxxxx wrote:
>
>> From: Chengming Zhou <zhouchengming@xxxxxxxxxxxxx>
>>
>> We have to invalidate any duplicate entry even when !zswap_enabled,
>> since zswap can be disabled at any time. If the folio was stored
>> successfully before, then got dirtied again while zswap is disabled,
>> we won't invalidate the old duplicate entry in zswap_store(). So a
>> later lru writeback may overwrite the new data in the swapfile.
>>
>> ...
>>
>> --- a/mm/zswap.c
>> +++ b/mm/zswap.c
>> @@ -1516,7 +1516,7 @@ bool zswap_store(struct folio *folio)
>>  	if (folio_test_large(folio))
>>  		return false;
>>  
>> -	if (!zswap_enabled || !tree)
>> +	if (!tree)
>>  		return false;
>>  
>>  	/*
>> @@ -1531,6 +1531,10 @@ bool zswap_store(struct folio *folio)
>>  		zswap_invalidate_entry(tree, dupentry);
>>  	}
>>  	spin_unlock(&tree->lock);
>> +
>> +	if (!zswap_enabled)
>> +		return false;
>> +
>>  	objcg = get_obj_cgroup_from_folio(folio);
>>  	if (objcg && !obj_cgroup_may_zswap(objcg)) {
>>  		memcg = get_mem_cgroup_from_objcg(objcg);
>
> OK, thanks.
>
> I saw only one reject from mm-unstable patches. Your patch "mm/zswap:
> make sure each swapfile always have zswap rb-tree" now does

It's correct. Thanks!

The other patch, which includes the optimization and cleanup, has been
updated against mm-unstable and resent:

https://lore.kernel.org/all/20240209044112.3883835-1-chengming.zhou@xxxxxxxxx/

>
> --- a/mm/zswap.c~mm-zswap-make-sure-each-swapfile-always-have-zswap-rb-tree
> +++ a/mm/zswap.c
> @@ -1518,9 +1518,6 @@ bool zswap_store(struct folio *folio)
>  	if (folio_test_large(folio))
>  		return false;
>  
> -	if (!tree)
> -		return false;
> -
>  	/*
>  	 * If this is a duplicate, it must be removed before attempting to store
>  	 * it, otherwise, if the store fails the old page won't be removed from
>