Re: [External] Re: [bug report] mm/zswap: memory corruption after zswap_load().

On Sat, Mar 23, 2024 at 6:52 PM Chris Li <chrisl@xxxxxxxxxx> wrote:
>
> On Fri, Mar 22, 2024 at 6:35 PM Zhongkun He
> <hezhongkun.hzk@xxxxxxxxxxxxx> wrote:
> >
> > On Sat, Mar 23, 2024 at 3:35 AM Yosry Ahmed <yosryahmed@xxxxxxxxxx> wrote:
> > >
> > > On Thu, Mar 21, 2024 at 8:04 PM Zhongkun He
> > > <hezhongkun.hzk@xxxxxxxxxxxxx> wrote:
> > > >
> > > > On Thu, Mar 21, 2024 at 5:29 PM Chengming Zhou <chengming.zhou@xxxxxxxxx> wrote:
> > > > >
> > > > > On 2024/3/21 14:36, Zhongkun He wrote:
> > > > > > On Thu, Mar 21, 2024 at 1:24 PM Chengming Zhou <chengming.zhou@xxxxxxxxx> wrote:
> > > > > >>
> > > > > >> On 2024/3/21 13:09, Zhongkun He wrote:
> > > > > >>> On Thu, Mar 21, 2024 at 12:42 PM Chengming Zhou
> > > > > >>> <chengming.zhou@xxxxxxxxx> wrote:
> > > > > >>>>
> > > > > >>>> On 2024/3/21 12:34, Zhongkun He wrote:
> > > > > >>>>> Hey folks,
> > > > > >>>>>
> > > > > >>>>> Recently, I tested zswap with memory reclaim on the mainline (6.8)
> > > > > >>>>> kernel and found a memory corruption issue related to exclusive loads.
> > > > > >>>>
> > > > > >>>> Is this fix included? 13ddaf26be32 ("mm/swap: fix race when skipping swapcache")
> > > > > >>>> This fix avoids concurrent swapin using the same swap entry.
> > > > > >>>>
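(Side note for readers: my understanding of 13ddaf26be32 is that it serializes
the swapcache-bypass path roughly like this; quoting from memory, so details
may differ:

        /* mm/memory.c, do_swap_page(), swapcache-bypass path */
        if (swapcache_prepare(entry)) {
                /* Another thread is swapping in this entry; back off
                 * briefly and retry the fault. */
                schedule_timeout_uninterruptible(1);
                goto out;
        }
        need_clear_cache = true;
        ...
        /* on every exit path, release the SWAP_HAS_CACHE marker */
        if (need_clear_cache)
                swapcache_clear(si, entry);

so two threads faulting on the same entry cannot both skip the swapcache at
the same time.)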
> > > > > >>>
> > > > > >>> Yes, this fix avoids concurrent swapin from different CPUs, but the
> > > > > >>> reported issue occurs on the same CPU.
> > > > > >>
> > > > > >> I think you may have misunderstood the race description in that fix's
> > > > > >> changelog; CPU0 and CPU1 just mean two concurrent threads, not two
> > > > > >> physical CPUs.
> > > > > >>
> > > > > >> Could you verify if the problem still exists with this fix?
> > > > > >
> > > > > > Yes, I'm sure the problem still exists with this patch.
> > > > > > Here is some debug info (from my tree, not mainline):
> > > > > >
> > > > > > bpftrace -e 'k:swap_readpage { printf("%lld, %lld, %ld, %ld, %ld\n%s",
> > > > > >     ((struct page *)arg0)->private, nsecs, tid, pid, cpu, kstack) }' \
> > > > > >     --include linux/mm_types.h
> > > > >
> > > > > Ok, this problem seems to happen only on SWP_SYNCHRONOUS_IO swap
> > > > > backends, which currently include zram, ramdisk, pmem and nvdimm.
> > > >
> > > > Yes.
> > > >
> > > > >
> > > > > Maybe it's not a good idea to use zswap on these swap backends?
> > > > >
> > > > > The problem here is that the page fault handler tries to skip the
> > > > > swapcache when swapping in the folio (swap entry count == 1), but then
> > > > > it can't install the folio into the pte because something changed in
> > > > > the meantime, such as a concurrent fork duplicating the entry.
> > > > >
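(For context, the bypass path being discussed is gated roughly like this in
do_swap_page(); quoting 6.8 from memory:

        /* mm/memory.c, do_swap_page(): only SWP_SYNCHRONOUS_IO backends
         * with a swap count of 1 skip the swapcache */
        if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
            __swap_count(entry) == 1) {
                /* allocate a fresh folio and read into it directly,
                 * without inserting it into the swapcache */
        }

which is why only zram, ramdisk, pmem and nvdimm users can hit this.)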
> > > >
> > > > The first page fault returned VM_FAULT_RETRY because
> > > > folio_lock_or_retry() failed.
> > >
> >
> > Hi Yosry,
> >
> > > How so? The folio is newly allocated and not visible to any other
> > > threads or CPUs. swap_read_folio() unlocks it and then returns, and we
> > > immediately try to lock it again with folio_lock_or_retry(). How does
> > > this fail?
> >
> > Haha, it confused me a lot too. Based on the steps to reproduce the
> > problem, I think the page is locked by shrink_folio_list(). Please see
> > the following interleaving:
> >
> > do_swap_page()                       /* faulting thread */
> >     __folio_set_locked(folio);
> >     swap_readpage(page, true, NULL);
> >         zswap_load(folio);
> >         folio_unlock(folio);
> >
> >                                      /* concurrent reclaim */
> >                                      shrink_folio_list()
> >                                          folio_trylock(folio); /* succeeds, reclaim holds the lock */
> >
> >     ret |= folio_lock_or_retry(folio, vmf);  /* fails */
> >     if (ret & VM_FAULT_RETRY)
> >         goto out_release;
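If I read this right, the retry is then fatal because the earlier
zswap_load() has already dropped zswap's copy; with exclusive loads the load
path does roughly (from memory):

        /* mm/zswap.c, zswap_load(), exclusive-load behavior */
        zswap_invalidate_entry(tree, entry);    /* drop zswap's copy */
        folio_mark_dirty(folio);                /* data now lives only in the folio */

Since the swapcache was bypassed, the dirty folio is freed on the out_release
path without being written anywhere, and the retried fault reads stale data
from the backing device.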
>
> Thanks for the detailed bug report. So this means the folio
> immediately gets reclaimed after zswap_load(), before do_swap_page
> returns, right?
>
> We also need to audit whether there is any other code path in
> do_swap_page() that can fail the swap fault without storing the folio
> in the swap cache.
>
> Chris
>
> >
> > Thanks.
> >
> > >
> > > Let's go over what happens after swap_read_folio():
> > > - The 'if (!folio)' code block will be skipped.
> > > - folio_lock_or_retry() should succeed as I mentioned earlier.
> > > - The 'if (swapcache)' code block will be skipped.
> > > - The pte_same() check should succeed at first glance because other
> > > concurrent faulting threads should be held off by the newly introduced
> > > swapcache_prepare() logic. But looking deeper, I think this one may
> > > fail due to a concurrent MADV_WILLNEED.
> > > - The 'if (unlikely(!folio_test_uptodate(folio)))' part will be
> > > skipped because swap_read_folio() marks the folio up-to-date.
> > > - After that point there is no possible failure until we install the
> > > pte, at which point concurrent faults will fail on !pte_same() and
> > > retry.
> > >
> > > So the only failure I think is possible is the pte_same() check. I see
> > > how a concurrent MADV_WILLNEED could cause that check to fail. A
> > > concurrent MADV_WILLNEED will block on swapcache_prepare(), but once
> > > the fault resolves it will go ahead and read the folio again into the
> > > swapcache. It seems like we will end up with two copies of the same
> > > folio? Maybe this is harmless because the folio in the swapcache will
> > > never be used, but it is essentially leaked at that point, right?

Right. There is already a good fix for this, which avoids the immediate release by zswap:
https://lore.kernel.org/linux-mm/20240322234826.GA448621@xxxxxxxxxxx/
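For reference, the pte_same() check discussed above looks roughly like this
in 6.8 (from memory):

        /* mm/memory.c, do_swap_page(), after the folio is locked */
        vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
                                       vmf->address, &vmf->ptl);
        if (unlikely(!vmf->pte ||
                     !pte_same(ptep_get(vmf->pte), vmf->orig_pte)))
                goto out_nomap;

If anything changed the pte in that window (e.g. the MADV_WILLNEED case
above), we bail out without mapping the folio.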

> > >
> > > I feel like I am missing something. Adding other folks that were
> > > involved in the recent swapcache_prepare() synchronization thread.
> > >
> > > Anyway, I agree that at least in theory the data corruption could
> > > happen because of exclusive loads when skipping the swapcache, and we
> > > should fix that.
> > >
> > > Perhaps the right thing to do may be to write the folio again to zswap
> > > before unlocking it and before calling swapcache_clear(). The need for
> > > the write can be detected by checking if the folio is dirty; I think
> > > this will only be true if the folio was loaded from zswap.
> >
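A minimal sketch of that idea as I understand it (hypothetical; the
zswap_store() call site and its placement next to the need_clear_cache flag
are my assumptions, not actual kernel code):

        /* do_swap_page(), before swapcache_clear() on the bypass path:
         * a dirty folio here means it was loaded from zswap (exclusive
         * load), so write it back to keep the swap entry's data valid
         * for the retried fault. */
        if (need_clear_cache) {
                if (folio_test_dirty(folio))
                        zswap_store(folio);     /* assumed call site */
                swapcache_clear(si, entry);
        }

That said, the fix linked above sidesteps the problem by avoiding the
immediate release on the zswap side instead.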




