The patch below does not apply to the 6.6-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable@xxxxxxxxxxxxxxx>.

To reproduce the conflict and resubmit, you may use the following commands:

git fetch https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/ linux-6.6.y
git checkout FETCH_HEAD
git cherry-pick -x 678e54d4bb9a4822f8ae99690ac131c5d490cdb1
# <resolve conflicts, build, test, etc.>
git commit -s
git send-email --to '<stable@xxxxxxxxxxxxxxx>' --in-reply-to '2024022622-resent-ripeness-43f1@gregkh' --subject-prefix 'PATCH 6.6.y' HEAD^..

Possible dependencies:

678e54d4bb9a ("mm/zswap: invalidate duplicate entry when !zswap_enabled")
a65b0e7607cc ("zswap: make shrinking memcg-aware")
ddc1a5cbc05d ("mempolicy: alloc_pages_mpol() for NUMA policy without vma")
23e4883248f0 ("mm: add page_rmappable_folio() wrapper")
c36f6e6dff4d ("mempolicy trivia: slightly more consistent naming")
7f1ee4e20708 ("mempolicy trivia: delete those ancient pr_debug()s")
1cb5d11a370f ("mempolicy: fix migrate_pages(2) syscall return nr_failed")
3657fdc2451a ("mm: move vma_policy() and anon_vma_name() decls to mm_types.h")
3022fd7af960 ("shmem: _add_to_page_cache() before shmem_inode_acct_blocks()")
054a9f7ccd0a ("shmem: move memcg charge out of shmem_add_to_page_cache()")
4199f51a7eb2 ("shmem: shmem_acct_blocks() and shmem_inode_acct_blocks()")
e3e1a5067fd2 ("shmem: remove vma arg from shmem_get_folio_gfp()")
75c70128a673 ("mm: mempolicy: make mpol_misplaced() to take a folio")
cda6d93672ac ("mm: memory: make numa_migrate_prep() to take a folio")
6695cf68b15c ("mm: memory: use a folio in do_numa_page()")
667ffc31aa95 ("mm: huge_memory: use a folio in do_huge_pmd_numa_page()")
73eab3ca481e ("mm: migrate: convert migrate_misplaced_page() to migrate_misplaced_folio()")
2ac9e99f3b21 ("mm: migrate: convert numamigrate_isolate_page() to numamigrate_isolate_folio()")

thanks,

greg k-h

------------------ original commit in Linus's tree ------------------

>From 678e54d4bb9a4822f8ae99690ac131c5d490cdb1 Mon Sep 17 00:00:00 2001
From: Chengming Zhou <zhouchengming@xxxxxxxxxxxxx>
Date: Thu, 8 Feb 2024 02:32:54 +0000
Subject: [PATCH] mm/zswap: invalidate duplicate entry when !zswap_enabled

We have to invalidate any duplicate entry even when !zswap_enabled,
since zswap can be disabled at any time.  If the folio was stored
successfully before, then gets dirtied again while zswap is disabled,
we won't invalidate the old duplicate entry in zswap_store().  So later
LRU writeback may overwrite the new data in the swapfile.
Link: https://lkml.kernel.org/r/20240208023254.3873823-1-chengming.zhou@xxxxxxxxx
Fixes: 42c06a0e8ebe ("mm: kill frontswap")
Signed-off-by: Chengming Zhou <zhouchengming@xxxxxxxxxxxxx>
Acked-by: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Nhat Pham <nphamcs@xxxxxxxxx>
Cc: Yosry Ahmed <yosryahmed@xxxxxxxxxx>
Cc: <stable@xxxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>

diff --git a/mm/zswap.c b/mm/zswap.c
index 36903d938c15..db4625af65fb 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1518,7 +1518,7 @@ bool zswap_store(struct folio *folio)
 	if (folio_test_large(folio))
 		return false;
 
-	if (!zswap_enabled || !tree)
+	if (!tree)
 		return false;
 
 	/*
@@ -1533,6 +1533,10 @@ bool zswap_store(struct folio *folio)
 		zswap_invalidate_entry(tree, dupentry);
 	}
 	spin_unlock(&tree->lock);
+
+	if (!zswap_enabled)
+		return false;
+
 	objcg = get_obj_cgroup_from_folio(folio);
 	if (objcg && !obj_cgroup_may_zswap(objcg)) {
 		memcg = get_mem_cgroup_from_objcg(objcg);
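
For anyone resolving the backport conflict by hand, here is a minimal
standalone C model (not kernel code; the struct and helper names below
are hypothetical stubs) of the ordering the fix establishes in
zswap_store(): the stale duplicate entry is dropped first, and only
then does the !zswap_enabled check bail out, so a disabled zswap can no
longer leave an old entry that later LRU writeback would push over the
newer data in the swapfile.

/*
 * Standalone model of the post-fix control flow in zswap_store().
 * All identifiers here are illustrative stubs, not kernel symbols.
 */
#include <stdbool.h>
#include <stdio.h>

struct zswap_tree_stub {
	bool has_dup;			/* a previously stored (now stale) entry */
};

static bool zswap_enabled;		/* can be toggled at any time */

static void invalidate_stub(struct zswap_tree_stub *tree)
{
	tree->has_dup = false;		/* drop the stale duplicate */
}

/* Ordering after commit 678e54d4bb9a: invalidate the duplicate, then bail. */
static bool zswap_store_model(struct zswap_tree_stub *tree)
{
	if (!tree)
		return false;

	/* Always invalidate a duplicate, even if zswap is currently off. */
	if (tree->has_dup)
		invalidate_stub(tree);

	if (!zswap_enabled)
		return false;

	/* ... compress and store the folio ... */
	return true;
}

int main(void)
{
	struct zswap_tree_stub tree = { .has_dup = true };

	zswap_enabled = false;		/* disabled after the earlier store */
	zswap_store_model(&tree);
	printf("stale duplicate left behind: %s\n",
	       tree.has_dup ? "yes" : "no");
	return 0;
}

With the pre-fix ordering (the !zswap_enabled test before the
duplicate lookup), the model above would print "yes", which is exactly
the stale-entry situation the commit message describes.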