On Sat, Mar 23, 2024 at 10:41:32AM +1300, Barry Song wrote:
> On Sat, Mar 23, 2024 at 8:38 AM Yosry Ahmed <yosryahmed@xxxxxxxxxx> wrote:
> >
> > On Fri, Mar 22, 2024 at 9:40 AM <chengming.zhou@xxxxxxxxx> wrote:
> > >
> > > From: Chengming Zhou <chengming.zhou@xxxxxxxxx>
> > >
> > > There is a report of data corruption caused by double swapin, which
> > > is only possible on the skip-swapcache path used for
> > > SWP_SYNCHRONOUS_IO backends.
> > >
> > > The root cause is that zswap is not like other "normal" swap
> > > backends: it doesn't keep a copy of the data after the first
> > > swapin. So if
>
> I don't quite understand this. So once we load a page from zswap,
> zswap will free it even though do_swap_page() might not install it in
> the PTE?
>
> Shouldn't zswap free the memory after notify_free, just like zram?

It's an optimization that zswap has: exclusive loads. After a page is
swapped in, it can stick around in the swapcache for a while. In that
case there would be two copies of the data in memory (compressed and
uncompressed), as there are with zram. Zswap implements exclusive loads
to drop the compressed copy. The folio is marked dirty so that any
attempt to reclaim it causes a new write (compression) to zswap. It
also enables a lot of cleanups and straightforward entry lifetime
tracking in zswap.

This is mostly fine; the problem here arises because we skip the
swapcache during swapin, so it is possible to load the folio from zswap
and then just drop it without stashing it anywhere.

> > > the folio from the first swapin can't be installed in the page
> > > table successfully, we just free it directly. Then on the second
> > > swapin we find nothing in zswap and read stale data from the
> > > swapfile, causing this data corruption.
> > >
> > > We can fix it by always adding the folio to the swapcache if we
> > > know the pinned swap entry can be found in zswap, so it won't get
> > > freed even if it can't be installed successfully on the first
> > > swapin.
> >
> > A concurrent faulting thread could have already checked the
> > swapcache before we add the folio to it, right? In that case, the
> > thread will go ahead and call swap_read_folio() anyway.
> >
> > Also, I suspect the zswap lookup might hurt performance. Would it be
> > better to add the folio back to zswap upon failure? This should be
> > detectable by checking whether the folio is dirty, as I mentioned in
> > the bug report thread.
>
> I don't like that idea either, as sync IO is the fast path for zram
> etc. Or, can we free the compressed data the way zram does?

I don't think we want to stop doing exclusive loads in zswap because of
this interaction with zram, which shouldn't be common. I think we can
solve this by just writing the folio back to zswap upon failure, as I
mentioned.
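
To make the failure mode concrete, here is a minimal userspace sketch
of the sequence of events, plus the "write back to zswap on failure"
idea. All names and data in it are made up for illustration; it only
models the logic, not the actual kernel code paths.

/*
 * Userspace model of the double-swapin corruption, and of re-storing
 * the dirty folio in zswap when the PTE install fails. Hypothetical
 * names throughout; this is not kernel code.
 */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define DATA_LEN 16

static char zswap_copy[DATA_LEN];		/* compressed copy in zswap */
static bool zswap_has_entry;
static char swapfile[DATA_LEN] = "stale";	/* old contents on disk */

/* Exclusive load: hand out the data and drop the zswap entry. */
static void swap_read_folio(char *folio, bool *dirty)
{
	if (zswap_has_entry) {
		memcpy(folio, zswap_copy, DATA_LEN);
		zswap_has_entry = false;	/* zswap frees its copy */
		*dirty = true;			/* folio marked dirty */
	} else {
		/* Nothing in zswap, fall back to the swapfile. */
		memcpy(folio, swapfile, DATA_LEN);
		*dirty = false;
	}
}

/*
 * The suggested fix: put dirty data back into zswap instead of
 * silently dropping it when the install fails.
 */
static void writeback_on_failure(const char *folio, bool dirty)
{
	if (dirty) {
		memcpy(zswap_copy, folio, DATA_LEN);
		zswap_has_entry = true;
	}
}

static void run(bool apply_fix)
{
	char folio[DATA_LEN];
	bool dirty;

	strncpy(zswap_copy, "fresh", DATA_LEN);
	zswap_has_entry = true;

	/* First swapin on the skip-swapcache (SWP_SYNCHRONOUS_IO) path. */
	swap_read_folio(folio, &dirty);

	/*
	 * The pte_same() check fails (e.g. a concurrent fault), so the
	 * folio is freed without being installed or stashed anywhere.
	 */
	if (apply_fix)
		writeback_on_failure(folio, dirty);

	/* Second swapin for the same entry. */
	swap_read_folio(folio, &dirty);
	printf("%s: second swapin saw \"%s\"\n",
	       apply_fix ? "fixed " : "broken", folio);
}

int main(void)
{
	run(false);	/* broken: second swapin saw "stale" */
	run(true);	/* fixed : second swapin saw "fresh" */
	return 0;
}

Without the writeback the second swapin reads the stale swapfile copy;
with it, the entry is back in zswap and the second swapin sees the
right data, without keeping exclusive loads from being the common case.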