Re: [External] Re: [bug report] mm/zswap: memory corruption after zswap_load().

On 2024/3/22 02:32, Yosry Ahmed wrote:
> On Thu, Mar 21, 2024 at 08:25:26AM -0700, Nhat Pham wrote:
>> On Thu, Mar 21, 2024 at 2:28 AM Chengming Zhou <chengming.zhou@xxxxxxxxx> wrote:
>>>
>>> On 2024/3/21 14:36, Zhongkun He wrote:
>>>> On Thu, Mar 21, 2024 at 1:24 PM Chengming Zhou <chengming.zhou@xxxxxxxxx> wrote:
>>>>>
>>>>> On 2024/3/21 13:09, Zhongkun He wrote:
>>>>>> On Thu, Mar 21, 2024 at 12:42 PM Chengming Zhou
>>>>>> <chengming.zhou@xxxxxxxxx> wrote:
>>>>>>>
>>>>>>> On 2024/3/21 12:34, Zhongkun He wrote:
>>>>>>>> Hey folks,
>>>>>>>>
>>>>>>>> Recently, I tested zswap with memory reclaim on the mainline
>>>>>>>> kernel (6.8) and found a memory corruption issue related to
>>>>>>>> exclusive loads.
>>>>>>>
>>>>>>> Is this fix included? 13ddaf26be32 ("mm/swap: fix race when skipping swapcache")
>>>>>>> This fix avoids concurrent swapin using the same swap entry.
>>>>>>>
>>>>>>
>>>>>> Yes, this fix avoids concurrent swapin from different CPUs, but the
>>>>>> reported issue occurs on the same CPU.
>>>>>
>>>>> I think you may have misunderstood the race description in that fix's
>>>>> changelog: CPU0 and CPU1 there just mean two concurrent threads, not
>>>>> two physical CPUs.
>>>>>
>>>>> Could you verify if the problem still exists with this fix?
>>>>
>>>> Yes, I'm sure the problem still exists with this patch applied.
>>>> Here is some debug info (from our kernel, not mainline):
>>>>
>>>> bpftrace -e'k:swap_readpage {printf("%lld, %lld,%ld,%ld,%ld\n%s",
>>>> ((struct page *)arg0)->private,nsecs,tid,pid,cpu,kstack)}' --include
>>>> linux/mm_types.h
>>>
>>> Ok, this problem seems to happen only on SWP_SYNCHRONOUS_IO swap
>>> backends, which currently include zram, ramdisk, pmem and nvdimm.
>>>
>>> Maybe it's not a good idea to use zswap on these swap backends?
>>
>> My gut reaction is to say yes, but I'll refrain from making sweeping
>> statements about backends I'm not too familiar with. Let's see:
>>
>> 1. zram: I don't even know why we're putting a compressed cache... in
>> front of a compressed faux swap device? Ramdisk == another in-memory
>> swap backend, right?
> 
> I personally use it for testing because it's easy, but I doubt any prod
> setups actually do that. That being said, I don't think we need to
> disable zswap completely for these swap backends just to address this
> bug.

Right, agreed! We'd better fix it.
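
For context, the swapcache bypass in question is the SWP_SYNCHRONOUS_IO
fast path in do_swap_page(), which in 6.8 looks roughly like this
(simplified):

	/* mm/memory.c, do_swap_page(), simplified */
	if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
	    __swap_count(entry) == 1) {
		/* skip the swapcache: read into a fresh folio */
		folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0,
					vma, vmf->address, false);
		...
	}

It's this "swap entry count == 1" path that can throw away the only
copy of the data after an exclusive zswap load.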

> 
>> 2. I looked it up, and it seems SWP_SYNCHRONOUS_IO was introduced for
>> fast swap storage (see the original patch series [1]). If that is the
>> case, one could argue there are diminishing returns to applying zswap
>> on top of it.
>>
>> [1]: https://lore.kernel.org/linux-mm/1505886205-9671-1-git-send-email-minchan@xxxxxxxxxx/
>>
>>>
>>> The problem here is that the page fault handler tries to skip the
>>> swapcache when swapping in the folio (swap entry count == 1), but
>>> then it can't install the folio into the pte, since something changed
>>> in the meantime, such as a concurrent fork duplicating the entry.
>>>
>>> Maybe we should write back that folio in this special case.
>>
>> But yes, if this is simple, maybe we can do it first to fix the bug?
> 
> Can we just enforce using the swapcache if zswap is in use? We cannot
> simply check whether zswap is enabled, because it could be the case
> that we stored some pages into zswap and then disabled it.
> 
> Perhaps we could keep track of whether zswap was ever enabled, or
> whether any pages were ever stored in zswap, and skip the
> no-swapcache optimization in that case?

Hmm, that way we'd have to add something to swap_info_struct to track
whether it has ever used zswap.
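
A rough sketch of that direction (the flag name and where it gets set
are hypothetical, just to illustrate):

	/* hypothetical member in swap_info_struct, set on the first
	 * successful zswap_store() for this device:
	 */
	bool		zswap_was_used;

	/* in do_swap_page(), only bypass the swapcache when zswap
	 * cannot be holding the entry:
	 */
	if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
	    __swap_count(entry) == 1 &&
	    !data_race(si->zswap_was_used)) {
		/* skip swapcache */
	}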

Another way I can think of is to add the folio to the swapcache if we
can't install it successfully, so the next swapin of this entry can
find it there.
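
Very roughly, in the failure path of the bypass, something like this
(locking, charging and refcount details glossed over; the
swapcache_bypass local is hypothetical):

	/* do_swap_page(): we read the folio via the swapcache bypass
	 * but pte_same() failed, so we can't map it. The exclusive
	 * zswap load already invalidated the entry, so instead of
	 * dropping the only copy of the data, park it in the swapcache
	 * for the next fault on this entry.
	 */
	if (swapcache_bypass && !folio_test_swapcache(folio)) {
		if (!add_to_swap_cache(folio, entry,
				       GFP_ATOMIC | __GFP_NOWARN, NULL))
			folio_mark_dirty(folio);
	}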
