On Thu, Mar 21, 2024 at 1:24 PM Chengming Zhou <chengming.zhou@xxxxxxxxx> wrote:
>
> On 2024/3/21 13:09, Zhongkun He wrote:
> > On Thu, Mar 21, 2024 at 12:42 PM Chengming Zhou
> > <chengming.zhou@xxxxxxxxx> wrote:
> >>
> >> On 2024/3/21 12:34, Zhongkun He wrote:
> >>> Hey folks,
> >>>
> >>> Recently, I tested zswap with memory reclaim on mainline
> >>> (6.8) and found a memory corruption issue related to exclusive loads.
> >>
> >> Is this fix included? 13ddaf26be32 ("mm/swap: fix race when skipping swapcache")
> >> This fix avoids concurrent swapin using the same swap entry.
> >>
> >
> > Yes, this fix avoids concurrent swapin from different CPUs, but the
> > reported issue occurs on the same CPU.
>
> I think you may have misunderstood the race description in that fix's changelog:
> CPU0 and CPU1 just mean two concurrent threads, not two real CPUs.
>
> Could you verify whether the problem still exists with this fix?

Yes, I'm sure the problem still exists with this patch applied. Here is some
debug info (the kernel is not mainline):

bpftrace -e'k:swap_readpage {printf("%lld, %lld,%ld,%ld,%ld\n%s", ((struct page *)arg0)->private,nsecs,tid,pid,cpu,kstack)}' --include linux/mm_types.h

offset   nsecs          tid    pid    cpu
2140659, 595771411052,  15045, 15045, 6
        swap_readpage+1
        do_swap_page+2135
        handle_mm_fault+2426
        do_user_addr_fault+462
        do_page_fault+48
        async_page_fault+62

offset   nsecs          tid    pid    cpu
2140659, 595771424445,  15045, 15045, 6
        swap_readpage+1
        do_swap_page+2135
        handle_mm_fault+2426
        do_user_addr_fault+462
        do_page_fault+48
        async_page_fault+62
-------------------------------
There are two page faults with the same tid and offset within 13393 nsecs.
(An expanded version of this probe, for catching the duplicate loads directly,
is sketched below the quoted report.)

>
> >
> > Thanks.
> >
> >> Thanks.
> >>
> >>>
> >>>
> >>> root@**:/sys/fs/cgroup/zz# stress --vm 5 --vm-bytes 1g --vm-hang 3 --vm-keep
> >>> stress: info: [31753] dispatching hogs: 0 cpu, 0 io, 5 vm, 0 hdd
> >>> stress: FAIL: [31758] (522) memory corruption at: 0x7f347ed1a010
> >>> stress: FAIL: [31753] (394) <-- worker 31758 returned error 1
> >>> stress: WARN: [31753] (396) now reaping child worker processes
> >>> stress: FAIL: [31753] (451) failed run completed in 14s
> >>>
> >>>
> >>> 1. Test steps (the frequency of memory reclaim has been accelerated):
> >>> -------------------------
> >>> a. set up zswap, zram and cgroup v2
> >>> b. echo 0 > /sys/kernel/mm/lru_gen/enabled
> >>>    (this increases the probability of the problem occurring)
> >>> c. mkdir /sys/fs/cgroup/zz
> >>>    echo $$ > /sys/fs/cgroup/zz/cgroup.procs
> >>>    cd /sys/fs/cgroup/zz/
> >>>    stress --vm 5 --vm-bytes 1g --vm-hang 3 --vm-keep
> >>>
> >>> e. in another shell:
> >>>    while :;do for i in {1..5};do echo 20g > /sys/fs/cgroup/zz/memory.reclaim & done;sleep 1;done
> >>>
> >>> 2. Root cause:
> >>> --------------------------
> >>> With a small probability, the page fault occurs twice with the
> >>> original pte, even though a new pte has already been set successfully.
> >>> Unfortunately, the zswap_entry was released during the first page fault
> >>> because of exclusive loads, so zswap_load() fails on the second fault; since
> >>> there is no corresponding data left in the swap space either, memory
> >>> corruption occurs.
> >>>
> >>> bpftrace -e'k:zswap_load {printf("%lld, %lld\n", ((struct page *)arg0)->private,nsecs)}' --include linux/mm_types.h > a.txt
> >>>
> >>> Looking for repeated indexes:
> >>>
> >>> index    nsecs
> >>> 1318876, 8976040736819
> >>> 1318876, 8976040746078
> >>>
> >>> 4123110, 8976234682970
> >>> 4123110, 8976234689736
> >>>
> >>> 2268896, 8976660124792
> >>> 2268896, 8976660130607
> >>>
> >>> 4634105, 8976662117938
> >>> 4634105, 8976662127596
> >>>
> >>> 3. Solution
> >>>
> >>> Should we free zswap_entries in batches, so that the zswap_entry is still
> >>> valid when the next page fault occurs with the original pte?
> >>> It would be great if there are other, better solutions.
> >>
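
As a rough aid, the two one-liners above can be combined into a single probe
that flags the duplicate load and measures the window between the two faults,
i.e. roughly how long a batched/deferred free would have to keep the
zswap_entry alive. This is only a sketch: the probe name and the struct page
cast simply mirror the traces above and may need adjusting on other kernel
versions.

bpftrace -e'
k:zswap_load
{
	// page->private holds the swap offset, same cast as the traces above
	$off = ((struct page *)arg0)->private;

	if (@first[tid, $off]) {
		// second load of the same offset by the same thread: this is
		// the window in which the zswap_entry would need to stay valid
		printf("duplicate zswap_load: tid=%d offset=%ld gap=%ld ns\n",
		       tid, $off, nsecs - @first[tid, $off]);
		@gap_ns = hist(nsecs - @first[tid, $off]);
		delete(@first[tid, $off]);
	} else {
		@first[tid, $off] = nsecs;
	}
}

END
{
	// avoid dumping the (large) first-load timestamp map on exit
	clear(@first);
}' --include linux/mm_types.h

The @gap_ns histogram printed at exit gives a feel for how long a deferred
zswap_entry would need to survive; with the ~13 us gap seen above, even a short
grace period per entry seems like it would be enough. Note that the map keeps
one entry per (tid, offset) that was only loaded once, so a long run may need a
larger bpftrace map size limit.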