Re: [External] Re: [bug report] mm/zswap: memory corruption after zswap_load().

On Fri, Mar 22, 2024 at 6:35 PM Zhongkun He
<hezhongkun.hzk@xxxxxxxxxxxxx> wrote:
>
> On Sat, Mar 23, 2024 at 3:35 AM Yosry Ahmed <yosryahmed@xxxxxxxxxx> wrote:
> >
> > On Thu, Mar 21, 2024 at 8:04 PM Zhongkun He
> > <hezhongkun.hzk@xxxxxxxxxxxxx> wrote:
> > >
> > > On Thu, Mar 21, 2024 at 5:29 PM Chengming Zhou <chengming.zhou@xxxxxxxxx> wrote:
> > > >
> > > > On 2024/3/21 14:36, Zhongkun He wrote:
> > > > > On Thu, Mar 21, 2024 at 1:24 PM Chengming Zhou <chengming.zhou@xxxxxxxxx> wrote:
> > > > >>
> > > > >> On 2024/3/21 13:09, Zhongkun He wrote:
> > > > >>> On Thu, Mar 21, 2024 at 12:42 PM Chengming Zhou
> > > > >>> <chengming.zhou@xxxxxxxxx> wrote:
> > > > >>>>
> > > > >>>> On 2024/3/21 12:34, Zhongkun He wrote:
> > > > >>>>> Hey folks,
> > > > >>>>>
> > > > >>>>> Recently, I tested the zswap with memory reclaiming in the mainline
> > > > >>>>> (6.8) and found a memory corruption issue related to exclusive loads.
> > > > >>>>
> > > > >>>> Is this fix included? 13ddaf26be32 ("mm/swap: fix race when skipping swapcache")
> > > > >>>> This fix avoids concurrent swapin using the same swap entry.
> > > > >>>>
> > > > >>>
> > > > >>> Yes, this fix avoids concurrent swapin from different CPUs, but the
> > > > >>> reported issue occurs on the same CPU.
> > > > >>
> > > > >> I think you may misunderstand the race description in this fix changelog,
> > > > >> the CPU0 and CPU1 just mean two concurrent threads, not real two CPUs.
> > > > >>
> > > > >> Could you verify if the problem still exists with this fix?
> > > > >
> > > > > Yes, I'm sure the problem still exists with this patch.
> > > > > Here is some debug info (from a debug build, not mainline):
> > > > >
> > > > > bpftrace -e'k:swap_readpage {printf("%lld, %lld,%ld,%ld,%ld\n%s",
> > > > > ((struct page *)arg0)->private,nsecs,tid,pid,cpu,kstack)}' --include
> > > > > linux/mm_types.h
> > > >
> > > > Ok, this problem seems to only happen on SWP_SYNCHRONOUS_IO swap
> > > > backends, which now include zram, ramdisk, pmem and nvdimm.
> > >
> > > Yes.
> > >
> > > >
> > > > Maybe it's not a good idea to use zswap on these swap backends?
> > > >
> > > > The problem here is that the page fault handler tries to skip the
> > > > swapcache to swap in the folio (swap entry count == 1), but then it
> > > > can't install the folio into the pte entry since something changed in
> > > > the meantime, such as a concurrent fork of the entry.
> > > >
> > >
> > > The first page fault returned VM_FAULT_RETRY because
> > > folio_lock_or_retry() failed.
> >
>
> Hi Yosry,
>
> > How so? The folio is newly allocated and not visible to any other
> > threads or CPUs. swap_read_folio() unlocks it and then returns and we
> > immediately try to lock it again with folio_lock_or_retry(). How does
> > this fail?
>
> Haha, it made me very confused as well. Based on the steps to reproduce the
> problem, I think the page is locked by shrink_folio_list(). Please see the
> following situation.

I missed the call to folio_add_lru() before swap_read_folio(). Reclaim
would be able to lock the folio in this case once it's unlocked by
swap_read_folio().

Thanks for elaborating.
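
To spell out the interleaving as I understand it (a sketch only; the
function names are taken from this thread and from mm/memory.c and
mm/vmscan.c, and the exact call sites may differ slightly between
versions):

```
do_swap_page() (swapcache skipped)      reclaim (shrink_folio_list())
--------------------------------------  --------------------------------
folio = vma_alloc_folio(...)
folio_add_lru(folio)      /* folio now
                             visible to reclaim */
swap_read_folio(folio)
  ...
  folio_unlock(folio)
                                        folio_trylock(folio) /* succeeds */
folio_lock_or_retry(folio)
  /* fails -> VM_FAULT_RETRY */
```

So the window is between the unlock at the end of swap_read_folio() and
the re-lock in folio_lock_or_retry(); once the folio is on the LRU,
reclaim can grab the lock in that window.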




