Re: [External] Re: [bug report] mm/zswap: memory corruption after zswap_load().

On Sat, Mar 23, 2024 at 3:48 AM Chris Li <chrisl@xxxxxxxxxx> wrote:
>
> On Fri, Mar 22, 2024 at 5:35 PM Barry Song <21cnbao@xxxxxxxxx> wrote:
> >
> > On Sat, Mar 23, 2024 at 12:42 PM Yosry Ahmed <yosryahmed@xxxxxxxxxx> wrote:
> > >
> > > On Fri, Mar 22, 2024 at 4:38 PM Barry Song <21cnbao@xxxxxxxxx> wrote:
> > > >
> > > > On Sat, Mar 23, 2024 at 12:35 PM Yosry Ahmed <yosryahmed@xxxxxxxxxx> wrote:
> > > > >
> > > > > On Fri, Mar 22, 2024 at 4:32 PM Barry Song <21cnbao@xxxxxxxxx> wrote:
> > > > > >
> > > > > > On Sat, Mar 23, 2024 at 12:23 PM Yosry Ahmed <yosryahmed@xxxxxxxxxx> wrote:
> > > > > > >
> > > > > > > On Fri, Mar 22, 2024 at 4:18 PM Barry Song <21cnbao@xxxxxxxxx> wrote:
> > > > > > > >
> > > > > > > > On Sat, Mar 23, 2024 at 12:09 PM Yosry Ahmed <yosryahmed@xxxxxxxxxx> wrote:
> > > > > > > > >
> > > > > > > > > On Fri, Mar 22, 2024 at 4:04 PM Barry Song <21cnbao@xxxxxxxxx> wrote:
> > > > > > > > > >
> > > > > > > > > > On Sat, Mar 23, 2024 at 8:35 AM Yosry Ahmed <yosryahmed@xxxxxxxxxx> wrote:
> > > > > > > > > > >
> > > > > > > > > > > On Thu, Mar 21, 2024 at 8:04 PM Zhongkun He
> > > > > > > > > > > <hezhongkun.hzk@xxxxxxxxxxxxx> wrote:
> > > > > > > > > > > >
> > > > > > > > > > > > On Thu, Mar 21, 2024 at 5:29 PM Chengming Zhou <chengming.zhou@xxxxxxxxx> wrote:
> > > > > > > > > > > > >
> > > > > > > > > > > > > On 2024/3/21 14:36, Zhongkun He wrote:
> > > > > > > > > > > > > > On Thu, Mar 21, 2024 at 1:24 PM Chengming Zhou <chengming.zhou@xxxxxxxxx> wrote:
> > > > > > > > > > > > > >>
> > > > > > > > > > > > > >> On 2024/3/21 13:09, Zhongkun He wrote:
> > > > > > > > > > > > > >>> On Thu, Mar 21, 2024 at 12:42 PM Chengming Zhou
> > > > > > > > > > > > > >>> <chengming.zhou@xxxxxxxxx> wrote:
> > > > > > > > > > > > > >>>>
> > > > > > > > > > > > > >>>> On 2024/3/21 12:34, Zhongkun He wrote:
> > > > > > > > > > > > > >>>>> Hey folks,
> > > > > > > > > > > > > >>>>>
> > > > > > > > > > > > > >>>>> Recently, I tested zswap with memory reclaiming on mainline (6.8) and
> > > > > > > > > > > > > >>>>> found a memory corruption issue related to exclusive loads.
> > > > > > > > > > > > > >>>>
> > > > > > > > > > > > > >>>> Is this fix included? 13ddaf26be32 ("mm/swap: fix race when skipping swapcache")
> > > > > > > > > > > > > >>>> This fix avoids concurrent swapin using the same swap entry.
> > > > > > > > > > > > > >>>>
> > > > > > > > > > > > > >>>
> > > > > > > > > > > > > >>> Yes, this fix avoids concurrent swapin from different CPUs, but the
> > > > > > > > > > > > > >>> reported issue occurs on the same CPU.
> > > > > > > > > > > > > >>
> > > > > > > > > > > > > >> I think you may misunderstand the race description in this fix changelog,
> > > > > > > > > > > > > >> the CPU0 and CPU1 just mean two concurrent threads, not two real CPUs.
> > > > > > > > > > > > > >>
> > > > > > > > > > > > > >> Could you verify if the problem still exists with this fix?
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Yes, I'm sure the problem still exists with this patch.
> > > > > > > > > > > > > > Here is some debug info (not from pure mainline):
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > bpftrace -e'k:swap_readpage {printf("%lld, %lld,%ld,%ld,%ld\n%s",
> > > > > > > > > > > > > > ((struct page *)arg0)->private,nsecs,tid,pid,cpu,kstack)}' --include
> > > > > > > > > > > > > > linux/mm_types.h
> > > > > > > > > > > > >
> > > > > > > > > > > > > Ok, this problem seems to only happen on SWP_SYNCHRONOUS_IO swap backends,
> > > > > > > > > > > > > which currently include zram, ramdisk, pmem, and nvdimm.
> > > > > > > > > > > >
> > > > > > > > > > > > Yes.
> > > > > > > > > > > >
> > > > > > > > > > > > >
> > > > > > > > > > > > > Maybe it is not a good idea to use zswap on these swap backends?
> > > > > > > > > > > > >
> > > > > > > > > > > > > The problem here is that the page fault handler tries to skip the swapcache
> > > > > > > > > > > > > when swapping in the folio (swap entry count == 1), but then it can't install
> > > > > > > > > > > > > the folio into the pte because something changed in the meantime, such as a
> > > > > > > > > > > > > concurrent fork of the entry.
> > > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > > The first page fault returned VM_FAULT_RETRY because
> > > > > > > > > > > > folio_lock_or_retry() failed.
> > > > > > > > > > >
> > > > > > > > > > > How so? The folio is newly allocated and not visible to any other
> > > > > > > > > > > threads or CPUs. swap_read_folio() unlocks it and then returns and we
> > > > > > > > > > > immediately try to lock it again with folio_lock_or_retry(). How does
> > > > > > > > > > > this fail?
> > > > > > > > > > >
> > > > > > > > > > > Let's go over what happens after swap_read_folio():
> > > > > > > > > > > - The 'if (!folio)' code block will be skipped.
> > > > > > > > > > > - folio_lock_or_retry() should succeed as I mentioned earlier.
> > > > > > > > > > > - The 'if (swapcache)' code block will be skipped.
> > > > > > > > > > > - The pte_same() check should succeed on first look because other
> > > > > > > > > > > concurrent faulting threads should be held off by the newly introduced
> > > > > > > > > > > swapcache_prepare() logic. But looking deeper I think this one may
> > > > > > > > > > > fail due to a concurrent MADV_WILLNEED.
> > > > > > > > > > > - The 'if (unlikely(!folio_test_uptodate(folio)))' part will be
> > > > > > > > > > > skipped because swap_read_folio() marks the folio up-to-date.
> > > > > > > > > > > - After that point there is no possible failure until we install the
> > > > > > > > > > > pte, at which point concurrent faults will fail on !pte_same() and
> > > > > > > > > > > retry.
> > > > > > > > > > >
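> > > > > > > > > > > For reference, a condensed sketch of that tail end of do_swap_page() for
> > > > > > > > > > > the skip-swapcache case (heavily simplified, error handling and details
> > > > > > > > > > > elided; not the exact mainline code):
> > > > > > > > > > >
> > > > > > > > > > >         /* skip-swapcache path: folio is freshly allocated, not in swapcache */
> > > > > > > > > > >         swap_read_folio(folio, true, NULL);     /* reads from zswap or the backend */
> > > > > > > > > > >
> > > > > > > > > > >         ret |= folio_lock_or_retry(folio, vmf); /* succeeds, folio is not shared yet */
> > > > > > > > > > >
> > > > > > > > > > >         vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
> > > > > > > > > > >                                        vmf->address, &vmf->ptl);
> > > > > > > > > > >         if (unlikely(!vmf->pte ||
> > > > > > > > > > >                      !pte_same(ptep_get(vmf->pte), vmf->orig_pte)))
> > > > > > > > > > >                 goto out_nomap;         /* the only realistic failure point */
> > > > > > > > > > >
> > > > > > > > > > >         if (unlikely(!folio_test_uptodate(folio)))
> > > > > > > > > > >                 goto out_nomap;         /* can't happen: swap_read_folio() set uptodate */
> > > > > > > > > > >
> > > > > > > > > > >         /* ... install the pte; later faults fail pte_same() and retry ... */
> > > > > > > > > > >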
> > > > > > > > > > > So the only failure I think is possible is the pte_same() check. I see
> > > > > > > > > > > how a concurrent MADV_WILLNEED could cause that check to fail. A
> > > > > > > > > > > concurrent MADV_WILLNEED will block on swapcache_prepare(), but once
> > > > > > > > > > > the fault resolves it will go ahead and read the folio again into the
> > > > > > > > > > > swapcache. It seems like we will end up with two copies of the same
> > > > > > > > > >
> > > > > > > > > > But zswap has already freed the object by the time do_swap_page finishes
> > > > > > > > > > swap_read_folio, due to zswap's exclusive load feature?
> > > > > > > > > >
> > > > > > > > > > So WILLNEED will read corrupted data and put it into the swapcache, and
> > > > > > > > > > some other concurrently forked process might then pick up that data from
> > > > > > > > > > the swapcache when it goes into do_swap_page.
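> > > > > > > > > >
> > > > > > > > > > Roughly the interleaving I mean (illustrative timeline only, using the
> > > > > > > > > > function names from this thread):
> > > > > > > > > >
> > > > > > > > > > /*
> > > > > > > > > >  * do_swap_page (skips swapcache)       madvise_willneed
> > > > > > > > > >  * ------------------------------       ----------------
> > > > > > > > > >  * swapcache_prepare(entry) succeeds
> > > > > > > > > >  * swap_read_folio()
> > > > > > > > > >  *   zswap_load() frees the zswap
> > > > > > > > > >  *   object (exclusive load)
> > > > > > > > > >  *                                      read_swap_cache_async(entry)
> > > > > > > > > >  *                                        waits on swapcache_prepare()
> > > > > > > > > >  * <another process forks, so the
> > > > > > > > > >  *  swap entry gains a second user>
> > > > > > > > > >  * swapcache_clear(entry)
> > > > > > > > > >  *                                      proceeds: reads stale data from the
> > > > > > > > > >  *                                      backing device (the zswap copy is
> > > > > > > > > >  *                                      gone), adds it to the swapcache
> > > > > > > > > >  *
> > > > > > > > > >  * later, the forked child faults on the entry, finds the stale folio in
> > > > > > > > > >  * the swapcache, and maps corrupted data
> > > > > > > > > >  */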
> > > > > > > > >
> > > > > > > > > Oh I was wondering how synchronization with WILLNEED happens without
> > > > > > > > > zswap. It seems like we could end up with two copies of the same folio
> > > > > > > > > and one of them will be leaked unless I am missing something.
> > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > So very likely a new process is forked right after do_swap_page finishes
> > > > > > > > > > swap_read_folio and before swapcache_clear.
> > > > > > > > > >
> > > > > > > > > > > folio? Maybe this is harmless because the folio in the swapcache will
> > > > > > > > > > > never be used, but it is essentially leaked at that point, right?
> > > > > > > > > > >
> > > > > > > > > > > I feel like I am missing something. Adding other folks that were
> > > > > > > > > > > involved in the recent swapcache_prepare() synchronization thread.
> > > > > > > > > > >
> > > > > > > > > > > Anyway, I agree that at least in theory the data corruption could
> > > > > > > > > > > happen because of exclusive loads when skipping the swapcache, and we
> > > > > > > > > > > should fix that.
> > > > > > > > > > >
> > > > > > > > > > > Perhaps the right thing to do may be to write the folio again to zswap
> > > > > > > > > > > before unlocking it and before calling swapcache_clear(). The need for
> > > > > > > > > > > the write can be detected by checking if the folio is dirty; I think
> > > > > > > > > > > this will only be true if the folio was loaded from zswap.
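> > > > > > > > > > >
> > > > > > > > > > > Very roughly, something like this in the skip-swapcache path, before the
> > > > > > > > > > > folio_unlock()/swapcache_clear() there (illustrative only; zswap_store()
> > > > > > > > > > > as it stands expects a swapcache folio, so zswap_restore_entry() below is
> > > > > > > > > > > a hypothetical helper):
> > > > > > > > > > >
> > > > > > > > > > >         /*
> > > > > > > > > > >          * The folio is only dirty here if the data came out of zswap
> > > > > > > > > > >          * (exclusive load); write it back so the swap entry stays
> > > > > > > > > > >          * valid for any later swapin of the same entry.
> > > > > > > > > > >          */
> > > > > > > > > > >         if (folio_test_dirty(folio))
> > > > > > > > > > >                 zswap_restore_entry(entry, folio);      /* hypothetical helper */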
> > > > > > > > > >
> > > > > > > > > > We only need to write when we know swap_read_folio() got the data from
> > > > > > > > > > zswap and not from the swapfile. Is there a quick way to tell?
> > > > > > > > >
> > > > > > > > > The folio will be dirty when loaded from zswap, so we can check if the
> > > > > > > > > folio is dirty and write the page back if we fail after swap_read_folio().
> > > > > > > >
> > > > > > > > Is it actually a bug in swapin_walk_pmd_entry? It only checks the pte
> > > > > > > > before read_swap_cache_async. But while read_swap_cache_async is
> > > > > > > > blocked by swapcache_prepare, by the time it gets swapcache_prepare
> > > > > > > > to succeed, someone else could have already set the pte and freed
> > > > > > > > the swap slot, even if this is not zswap?
> > > > > > >
> > > > > > > If someone freed the swap slot then swapcache_prepare() should fail,
> > > > > > > but the swap entry could have been recycled after we dropped the pte
> > > > > > > lock, right?
> > > > > > >
> > > > > > > Anyway, yeah, I think there might be a bug here unrelated to zswap.
> > > > > > >
> > > > > > > >
> > > > > > > > static int swapin_walk_pmd_entry(pmd_t *pmd, unsigned long start,
> > > > > > > >                 unsigned long end, struct mm_walk *walk)
> > > > > > > > {
> > > > > > > >         struct vm_area_struct *vma = walk->private;
> > > > > > > >         struct swap_iocb *splug = NULL;
> > > > > > > >         pte_t *ptep = NULL;
> > > > > > > >         spinlock_t *ptl;
> > > > > > > >         unsigned long addr;
> > > > > > > >
> > > > > > > >         for (addr = start; addr < end; addr += PAGE_SIZE) {
> > > > > > > >                 pte_t pte;
> > > > > > > >                 swp_entry_t entry;
> > > > > > > >                 struct folio *folio;
> > > > > > > >
> > > > > > > >                 if (!ptep++) {
> > > > > > > >                         ptep = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
> > > > > > > >                         if (!ptep)
> > > > > > > >                                 break;
> > > > > > > >                 }
> > > > > > > >
> > > > > > > >                 pte = ptep_get(ptep);
> > > > > > > >                 if (!is_swap_pte(pte))
> > > > > > > >                         continue;
> > > > > > > >                 entry = pte_to_swp_entry(pte);
> > > > > > > >                 if (unlikely(non_swap_entry(entry)))
> > > > > > > >                         continue;
> > > > > > > >
> > > > > > > >                 pte_unmap_unlock(ptep, ptl);
> > > > > > > >                 ptep = NULL;
> > > > > > > >
> > > > > > > >                 folio = read_swap_cache_async(entry, GFP_HIGHUSER_MOVABLE,
> > > > > > > >                                              vma, addr, &splug);
> > > > > > > >                 if (folio)
> > > > > > > >                         folio_put(folio);
> > > > > > > >         }
> > > > > > > >
> > > > > > > >         if (ptep)
> > > > > > > >                 pte_unmap_unlock(ptep, ptl);
> > > > > > > >         swap_read_unplug(splug);
> > > > > > > >         cond_resched();
> > > > > > > >
> > > > > > > >         return 0;
> > > > > > > > }
> > > > > > > >
> > > > > > > > I mean the pte can become non-swap within read_swap_cache_async(),
> > > > > > > > so we have the bug whether or not it is zswap.
> > > > > >
> > > > > > Checked again: this is probably still a zswap issue, as swapcache_prepare can
> > > > > > detect a real swap slot free :-)
> > > > > >
> > > > > >                 /*
> > > > > >                  * Swap entry may have been freed since our caller observed it.
> > > > > >                  */
> > > > > >                 err = swapcache_prepare(entry);
> > > > > >                 if (!err)
> > > > > >                         break;
> > > > > >
> > > > > >
> > > > > > A zswap exclusive load isn't a real swap free.
> > > > > >
> > > > > > But at least we have probably found the timing that causes the issue :-)
> > > > >
> > > > > The problem I was referring to is with the swapin fault path that
> > > > > skips the swapcache vs. MADV_WILLNEED. The fault path could swapin the
> > > > > page and skip the swapcache, and MADV_WILLNEED could swap it in again
> > > > > into the swapcache. We would end up with two copies of the folio.
> > > >
> > > > Right. I feel like we have to re-check that the pte has not changed within
> > > > __read_swap_cache_async after swapcache_prepare succeeds, having been
> > > > blocked for a while, as the previous entry could have been freed and
> > > > re-allocated by someone else, a completely different process. Then we
> > > > would read the other process's data.
> >
> > >
> > > This is only a problem when we skip the swapcache during swapin.
> > > Otherwise the swapcache synchronizes this. I wonder how much skipping
> > > the swapcache buys us on recent kernels? This optimization was
> > > introduced a long time ago.
> >
> > It still performs quite well, according to Kairui's data:
> > https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=13ddaf26be324a7f951891ecd9ccd04466d27458
> >
> > Before: 10934698 us
> > After: 11157121 us
> > Cached: 13155355 us (Dropping SWP_SYNCHRONOUS_IO flag)
> >
> > BTW, zram+zswap seems pointless from the very beginning; it seems like a
> > wrong configuration for users. If this case is really happening, could we
> > simply fix it by:
> >
> > diff --git a/mm/memory.c b/mm/memory.c
> > index b7cab8be8632..6742d1428373 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -3999,7 +3999,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> >         swapcache = folio;
> >
> >         if (!folio) {
> > -               if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
> > +               if (!is_zswap_enabled() && data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
>
> Because zswap_enabled can change at run time due to the delayed setup of zswap.
>
> This has a time-of-check-to-time-of-use (TOCTOU) issue.

Never mind that, I just realized that even if zswap was enabled, the
data race does not affect the current swap entry, which was already
swapped out before zswap_enabled changed.

Chris


>
> Maybe moving the check into zswap_store() is better.
>
> Something like this.
>
> Zhongkun, can you verify that the bug goes away with this change?
>
> Chris
>
>
>     zswap: disable SWP_SYNCHRONOUS_IO in zswap_store
>
> diff --git a/mm/zswap.c b/mm/zswap.c
> index f04a75a36236..f40778adefa3 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -1472,6 +1472,7 @@ bool zswap_store(struct folio *folio)
>         struct obj_cgroup *objcg = NULL;
>         struct mem_cgroup *memcg = NULL;
>         unsigned long max_pages, cur_pages;
> +       struct swap_info_struct *si = NULL;
>
>         VM_WARN_ON_ONCE(!folio_test_locked(folio));
>         VM_WARN_ON_ONCE(!folio_test_swapcache(folio));
> @@ -1483,6 +1484,18 @@ bool zswap_store(struct folio *folio)
>         if (!zswap_enabled)
>                 goto check_old;
>
> +       /* Prevent swapoff from happening to us. */
> +       si = get_swap_device(swp);
> +       if (si) {
> +               /*
> +                * SWP_SYNCHRONOUS_IO bypasses the swap cache, which is not
> +                * compatible with zswap exclusive loads.
> +                */
> +               if (data_race(si->flags & SWP_SYNCHRONOUS_IO))
> +                       si->flags &= ~SWP_SYNCHRONOUS_IO;
> +               put_swap_device(si);
> +       }
> +
>         /* Check cgroup limits */
>         objcg = get_obj_cgroup_from_folio(folio);
>         if (objcg && !obj_cgroup_may_zswap(objcg)) {




