On 2024/3/25 17:40, Yosry Ahmed wrote:
> On Mon, Mar 25, 2024 at 2:22 AM Chengming Zhou <chengming.zhou@xxxxxxxxx> wrote:
>>
>> On 2024/3/25 16:38, Yosry Ahmed wrote:
>>> On Mon, Mar 25, 2024 at 12:33 AM Chengming Zhou
>>> <chengming.zhou@xxxxxxxxx> wrote:
>>>>
>>>> On 2024/3/25 15:06, Yosry Ahmed wrote:
>>>>> On Sun, Mar 24, 2024 at 9:54 PM Barry Song <21cnbao@xxxxxxxxx> wrote:
>>>>>>
>>>>>> On Mon, Mar 25, 2024 at 10:23 AM Yosry Ahmed <yosryahmed@xxxxxxxxxx> wrote:
>>>>>>>
>>>>>>> On Sun, Mar 24, 2024 at 2:04 PM Johannes Weiner <hannes@xxxxxxxxxxx> wrote:
>>>>>>>>
>>>>>>>> Zhongkun He reports data corruption when combining zswap with zram.
>>>>>>>>
>>>>>>>> The issue is the exclusive loads we're doing in zswap. They assume
>>>>>>>> that all reads are going into the swapcache, which can assume
>>>>>>>> authoritative ownership of the data and so the zswap copy can go.
>>>>>>>>
>>>>>>>> However, zram files are marked SWP_SYNCHRONOUS_IO, and faults will try
>>>>>>>> to bypass the swapcache. This results in an optimistic read of the
>>>>>>>> swap data into a page that will be dismissed if the fault fails due to
>>>>>>>> races. In this case, zswap mustn't drop its authoritative copy.
>>>>>>>>
>>>>>>>> Link: https://lore.kernel.org/all/CACSyD1N+dUvsu8=zV9P691B9bVq33erwOXNTmEaUbi9DrDeJzw@xxxxxxxxxxxxxx/
>>>>>>>> Reported-by: Zhongkun He <hezhongkun.hzk@xxxxxxxxxxxxx>
>>>>>>>> Fixes: b9c91c43412f ("mm: zswap: support exclusive loads")
>>>>>>>> Cc: stable@xxxxxxxxxxxxxxx [6.5+]
>>>>>>>> Signed-off-by: Johannes Weiner <hannes@xxxxxxxxxxx>
>>>>>>>> Tested-by: Zhongkun He <hezhongkun.hzk@xxxxxxxxxxxxx>
>>>>>>
>>>>>> Acked-by: Barry Song <baohua@xxxxxxxxxx>
>>>>>>
>>>>>>>
>>>>>>> Do we also want to mention somewhere (commit log or comment) that
>>>>>>> keeping the entry in the tree is fine because we are still protected
>>>>>>> from concurrent loads/invalidations/writeback by swapcache_prepare()
>>>>>>> setting SWAP_HAS_CACHE or so?
>>>>>>
>>>>>> It seems that Kairui's patch comprehensively addresses the issue at hand.
>>>>>> Johannes's solution, on the other hand, appears to align zswap behavior
>>>>>> more closely with that of a traditional swap device, only releasing an entry
>>>>>> when the corresponding swap slot is freed, particularly in the sync-io case.
>>>>>
>>>>> It actually worked out quite well that Kairui's fix landed shortly
>>>>> before this bug was reported, as this fix wouldn't have been possible
>>>>> without it as far as I can tell.
>>>>>
>>>>>>
>>>>>> Johannes' patch has inspired me to consider whether zRAM could achieve
>>>>>> a comparable outcome by immediately releasing objects in swap cache
>>>>>> scenarios. When I have the opportunity, I plan to experiment with zRAM.
>>>>>
>>>>> That would be interesting. I am curious if it would be as
>>>>> straightforward in zram to just mark the folio as dirty in this case
>>>>> like zswap does, given its implementation as a block device.
>>>>>
>>>>
>>>> This makes me wonder who is responsible for marking folio dirty in this swapcache
>>>> bypass case? Should we call folio_mark_dirty() after the swap_read_folio()?
>>>
>>> In shrink_folio_list(), we try to add anonymous folios to the
>>> swapcache if they are not there before checking if they are dirty.
>>> add_to_swap() calls folio_mark_dirty(), so this should take care of
>>
>> Right, thanks for your clarification, so should be no problem here.
>> Although it was a fix just for MADV_FREE case.
>>
>>> it. There is an interesting comment there though.
>>> It says that PTE should be dirty, so unmapping the folio should have
>>> already marked it as dirty by the time we are adding it to the
>>> swapcache, except for the MADV_FREE case.
>>
>> It seems to say the folio will be dirtied when unmap later, supposing the
>> PTE is dirty.
>
> Oh yeah it could mean that the folio will be dirtied later.
>
>>
>>>
>>> However, I think we actually unmap the folio after we add it to the
>>> swapcache in shrink_folio_list(). Also, I don't immediately see why
>>> the PTE would be dirty. In do_swap_page(), making the PTE dirty seems
>>
>> If all anon pages on LRU list are faulted by write, it should be true.
>> We could just use the zero page if faulted by read, right?
>
> This applies for the initial fault that creates the folio, but this is
> a swap fault. It could be a read fault and in that case we still need
> to make the folio dirty because it's not in the swapcache and we need
> to write it out if it's reclaimed, right?

Yes, IMHO it should be marked dirty here. But the unconditional
folio_mark_dirty() in add_to_swap() should already cover that, so there
should be no problem there. Not sure if there are other issues.

>
>>
>>> to be conditional on the fault being a write fault, but I didn't look
>>> thoroughly, maybe I missed it. It is also possible that the comment is
>>> just outdated.
>>
>> Yeah, dirty is only marked on write fault.
>>
>> Thanks.
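
To make the zswap side of the discussion concrete, here is a rough sketch
of the behavior Johannes' fix gives us, not the actual patch: only
zswap_load(), folio_test_swapcache(), folio_mark_dirty() and folio->swap
are real kernel interfaces, while zswap_find_entry(),
zswap_decompress_to_folio() and zswap_drop_entry() are placeholder names
standing in for the real zswap internals. The point is that zswap only
gives up its copy, and only marks the folio dirty, when the read goes
through the swapcache; in the SWP_SYNCHRONOUS_IO bypass case the entry is
kept because the optimistic read may still be discarded.

/*
 * Sketch only -- zswap_find_entry(), zswap_decompress_to_folio() and
 * zswap_drop_entry() are placeholder names, not the real zswap helpers.
 */
bool zswap_load(struct folio *folio)
{
	bool swapcache = folio_test_swapcache(folio);
	struct zswap_entry *entry;

	entry = zswap_find_entry(folio->swap);		/* placeholder lookup */
	if (!entry)
		return false;

	zswap_decompress_to_folio(entry, folio);	/* placeholder */

	if (swapcache) {
		/*
		 * The swapcache now holds the authoritative copy, so the
		 * zswap entry can go. Mark the folio dirty so reclaim
		 * writes it back out instead of dropping it.
		 */
		zswap_drop_entry(entry);		/* placeholder */
		folio_mark_dirty(folio);
	}

	/*
	 * SWP_SYNCHRONOUS_IO bypass: the optimistic read may still be
	 * dismissed if the fault races, so zswap keeps its copy and the
	 * folio is not marked dirty here.
	 */
	return true;
}

In the bypass case this makes zswap behave like a traditional swap device:
the data stays where it is until the swap slot itself is freed.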