Hey folks,

Recently, I tested zswap with memory reclaim on mainline (6.8) and found
a memory corruption issue related to exclusive loads.

root@**:/sys/fs/cgroup/zz# stress --vm 5 --vm-bytes 1g --vm-hang 3 --vm-keep
stress: info: [31753] dispatching hogs: 0 cpu, 0 io, 5 vm, 0 hdd
stress: FAIL: [31758] (522) memory corruption at: 0x7f347ed1a010
stress: FAIL: [31753] (394) <-- worker 31758 returned error 1
stress: WARN: [31753] (396) now reaping child worker processes
stress: FAIL: [31753] (451) failed run completed in 14s

1. Test steps (the frequency of memory reclaim has been accelerated):
-------------------------
a. set up zswap, zram and cgroup v2
b. echo 0 > /sys/kernel/mm/lru_gen/enabled
   (increases the probability of hitting the problem)
c. mkdir /sys/fs/cgroup/zz
   echo $$ > /sys/fs/cgroup/zz/cgroup.procs
   cd /sys/fs/cgroup/zz/
d. stress --vm 5 --vm-bytes 1g --vm-hang 3 --vm-keep
e. in another shell:
   while :;do for i in {1..5};do echo 20g > /sys/fs/cgroup/zz/memory.reclaim & done;sleep 1;done

2. Root cause:
--------------------------
With a small probability, a page fault can occur twice for the same
(original) pte, even though a new pte has already been set successfully.
Unfortunately, with exclusive loads the zswap_entry was already released
during the first page fault, so zswap_load() fails on the second fault,
and since there is no corresponding data in the backing swap space
either, memory corruption occurs.

This can be observed by tracing zswap_load() and looking for the same
swap index appearing twice:

bpftrace -e'k:zswap_load {printf("%lld, %lld\n", ((struct page *)arg0)->private,nsecs)}' --include linux/mm_types.h > a.txt

index    nsecs
1318876, 8976040736819
1318876, 8976040746078
4123110, 8976234682970
4123110, 8976234689736
2268896, 8976660124792
2268896, 8976660130607
4634105, 8976662117938
4634105, 8976662127596

3. Solution
--------------------------
Should we free zswap_entry in batches, so that the zswap_entry is still
valid when the next page fault occurs with the original pte?
It would be great if there are other better solutions.
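
For reference, this is roughly the exclusive-load behaviour I am
describing. It is only an illustrative model with placeholder
identifiers (zswap_load_model, zswap_tree_lookup, zswap_tree_erase,
zswap_decompress_to_page, zswap_entry_release) and with locking and
refcount details trimmed, not the actual mm/zswap.c code:

/*
 * Illustrative model of an exclusive load; placeholder identifiers,
 * not the actual mm/zswap.c code.
 */
static bool zswap_load_model(struct zswap_tree *tree, pgoff_t offset,
			     struct page *page)
{
	struct zswap_entry *entry;

	spin_lock(&tree->lock);
	entry = zswap_tree_lookup(tree, offset);	/* first fault: found */
	if (!entry) {
		/*
		 * The second fault on the same (stale) pte lands here:
		 * the entry was already removed by the first, exclusive
		 * load, and the data was never written to the backing
		 * swap device either, so the page ends up corrupted.
		 */
		spin_unlock(&tree->lock);
		return false;
	}
	zswap_tree_erase(tree, entry);		/* exclusive: drop the entry */
	spin_unlock(&tree->lock);

	zswap_decompress_to_page(entry, page);	/* fill the faulted page */
	zswap_entry_release(entry);		/* compressed copy is gone */
	return true;
}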
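
To make the question in section 3 a bit more concrete, here is a very
rough, untested sketch of what "freeing zswap_entry in batches" could
look like. All identifiers here (the deferred-free structure, the
"deferred" list_head assumed to be added to struct zswap_entry, the
batch size, the zswap_entry_release() helper) are hypothetical
placeholders, not existing zswap code:

/*
 * Hypothetical sketch only: batch the release of zswap entries on
 * (exclusive) load instead of freeing them immediately, so that a
 * repeated fault on the stale pte can still find the entry in the
 * zswap tree. None of these identifiers exist in mm/zswap.c today.
 */
#include <linux/list.h>
#include <linux/spinlock.h>

#define ZSWAP_DEFERRED_FREE_BATCH	64

struct zswap_deferred_free {
	struct list_head	entries;	/* loaded, not yet invalidated */
	unsigned int		nr;
	spinlock_t		lock;
};

/*
 * Called from the load path instead of invalidating the entry right
 * away. The entry is left in the zswap tree until the batch is
 * flushed, so a second fault with the original pte can still hit it.
 */
static void zswap_entry_free_deferred(struct zswap_deferred_free *df,
				      struct zswap_entry *entry)
{
	LIST_HEAD(to_free);
	struct zswap_entry *e, *tmp;

	spin_lock(&df->lock);
	/* A real version must not queue the same entry twice. */
	list_add_tail(&entry->deferred, &df->entries);
	if (++df->nr >= ZSWAP_DEFERRED_FREE_BATCH) {
		list_splice_init(&df->entries, &to_free);
		df->nr = 0;
	}
	spin_unlock(&df->lock);

	list_for_each_entry_safe(e, tmp, &to_free, deferred) {
		/*
		 * A real version would erase e from its zswap tree here
		 * (under the tree lock) before dropping its memory.
		 */
		zswap_entry_release(e);		/* hypothetical free helper */
	}
}

The obvious downsides are that the compressed copies are kept alive
longer and that the batch size / flush point is arbitrary, so this is
only meant to illustrate the direction of the question, not a proposed
patch.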