Re: [External] Re: [bug report] mm/zswap: memory corruption after zswap_load().

On Thu, Mar 21, 2024 at 11:25 PM Nhat Pham <nphamcs@xxxxxxxxx> wrote:
>
> On Thu, Mar 21, 2024 at 2:28 AM Chengming Zhou <chengming.zhou@xxxxxxxxx> wrote:
> >
> > On 2024/3/21 14:36, Zhongkun He wrote:
> > > On Thu, Mar 21, 2024 at 1:24 PM Chengming Zhou <chengming.zhou@xxxxxxxxx> wrote:
> > >>
> > >> On 2024/3/21 13:09, Zhongkun He wrote:
> > >>> On Thu, Mar 21, 2024 at 12:42 PM Chengming Zhou
> > >>> <chengming.zhou@xxxxxxxxx> wrote:
> > >>>>
> > >>>> On 2024/3/21 12:34, Zhongkun He wrote:
> > >>>>> Hey folks,
> > >>>>>
> > >>>>> Recently, I tested zswap with memory reclaim on mainline (6.8) and
> > >>>>> found a memory corruption issue related to exclusive loads.
> > >>>>
> > >>>> Is this fix included? 13ddaf26be32 ("mm/swap: fix race when skipping swapcache")
> > >>>> This fix avoids concurrent swapin using the same swap entry.
> > >>>>
> > >>>
> > >>> Yes, this fix avoids concurrent swapin from different CPUs, but the
> > >>> reported issue occurs on the same CPU.
> > >>
> > >> I think you may have misunderstood the race description in that fix's
> > >> changelog: CPU0 and CPU1 there just mean two concurrent threads, not
> > >> two physical CPUs.
> > >>
> > >> Could you verify if the problem still exists with this fix?
> > >
> > > Yes, I'm sure the problem still exists with this patch applied.
> > > Here is some debug info (from our tree, which is not pure mainline):
> > >
> > > bpftrace --include linux/mm_types.h -e 'k:swap_readpage {
> > >     printf("%lld, %lld, %ld, %ld, %ld\n%s",
> > >         ((struct page *)arg0)->private, nsecs, tid, pid, cpu, kstack)
> > > }'
> >
> > Ok, this problem seems to happen only on SWP_SYNCHRONOUS_IO swap
> > backends, which currently include zram, ramdisk, pmem, and nvdimm.
> >
> > Maybe it's not a good idea to use zswap on top of these swap backends?

Hi Nhat,

>
> My gut reaction is to say yes, but I'll refrain from making sweeping
> statements about backends I'm not too familiar with. Let's see:
>
> 1. zram: I don't even know why we're putting a compressed cache... in
> front of a compressed faux swap device? Ramdisk == another in-memory
> swap backend, right?

It is currently used for testing, and will later be deployed in
production as a temporary measure to reduce performance jitter.

> 2. I looked it up, and it seems SWP_SYNCHRONOUS_IO was introduced for
> fast swap storage (see the original patch series [1]). If that is the
> case, one could argue there are diminishing returns to applying zswap
> on top of it.
>

Sounds good.
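For anyone following along, the bypass in question is gated on exactly
that flag. Here is a heavily trimmed sketch of the fast path in
do_swap_page() (mm/memory.c), paraphrased from the 6.8 tree, so treat
the names and details as approximate rather than exact:

    /* do_swap_page(): swapcache bypass for fast, synchronous backends */
    if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
        __swap_count(entry) == 1) {
            /*
             * Allocate a fresh folio and read the page in directly,
             * without ever inserting it into the swapcache.
             */
            folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0,
                                    vma, vmf->address, false);
            ...
            swap_readpage(page, true, NULL);    /* synchronous read */
    }

Because the folio never enters the swapcache, nothing else holds a copy
of the data once zswap's exclusive load has invalidated the compressed
entry.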

> [1]: https://lore.kernel.org/linux-mm/1505886205-9671-1-git-send-email-minchan@xxxxxxxxxx/
>
> >
> > The problem here is that the page fault handler tries to skip the
> > swapcache when swapping in the folio (swap entry count == 1), but then
> > it can't install the folio into the PTE because something changed in
> > the meantime, such as a concurrent fork duplicating the entry.
> >
> > Maybe we should write back that folio in this special case.
>
> But yes, if this is simple, maybe we can do it first to fix the bug?
>
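If it helps, my reading of the window, paraphrased from the tail of
that same bypass path (simplified, not the exact code):

    /* do_swap_page(), after the folio was read in via the bypass */
    vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
                                   vmf->address, &vmf->ptl);
    if (unlikely(!vmf->pte ||
                 !pte_same(ptep_get(vmf->pte), vmf->orig_pte)))
            goto out_nomap;     /* PTE changed, e.g. concurrent fork */

    /*
     * out_nomap drops the freshly-read folio. With exclusive loads,
     * the zswap entry was already invalidated in zswap_load(), and
     * the swapcache was bypassed, so no copy of the data survives;
     * the retried fault then reads stale data from the backing device.
     */

Writing the folio back (or keeping a copy in the swapcache) before
dropping it, as suggested above, would close that window.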