On Tue, Oct 22, 2024 at 5:46 PM Yosry Ahmed <yosryahmed@xxxxxxxxxx> wrote:
>
> [..]
> > >> @@ -1576,6 +1576,52 @@ bool zswap_store(struct folio *folio)
> > >>  	return ret;
> > >>  }
> > >>
> > >> +static bool swp_offset_in_zswap(unsigned int type, pgoff_t offset)
> > >> +{
> > >> +	return (offset >> SWAP_ADDRESS_SPACE_SHIFT) < nr_zswap_trees[type];
> > >
> > > I am not sure I understand what we are looking for here. When does
> > > this return false? Aren't the zswap trees always allocated during
> > > swapon?
> > >
> >
> > Hi Yosry,
> >
> > Thanks for the review!
> >
> > It becomes useful in patch 3 when trying to determine if a large folio can be allocated.
> >
> > For e.g. if the swap entry is the last entry of the last tree, and 1M folios are enabled
> > (nr_pages = 256), then the while loop in zswap_present_test will try to access a tree
> > that doesn't exist from the 2nd 4K page onwards if we dont have this check in
> > zswap_present_test.
>
> Doesn't swap_pte_batch() make sure that the range of swap entries
> passed here all corresponds to existing swap entries, and those
> entries should always have a corresponding zswap tree? How can the
> passed in range contain an entry that is not in any zswap tree?
>
> I feel like I am missing something.
>
> > >
> > >> +}
> > >> +
> > >> +/* Returns true if the entire folio is in zswap */
> > >
> > > There isn't really a folio at this point, maybe "Returns true if the
> > > entire range is in zswap"?
> >
> > Will change, Thanks!
> >
> > >
> > > Also, this is racy because an exclusive load, invalidation, or
> > > writeback can cause an entry to be removed from zswap. Under what
> > > conditions is this safe? The caller can probably guarantee we don't
> > > race against invalidation, but can we guarantee that concurrent
> > > exclusive loads or writebacks don't happen?
> > >
> > > If the answer is yes, this needs to be properly documented.
> >
> > swapcache_prepare should stop things from becoming racy.
> >
> > lets say trying to swapin a mTHP of size 32 pages:
> > - T1 is doing do_swap_page, T2 is doing zswap_writeback.
> > - T1 - Check if the entire 32 pages is in zswap, swapcache_prepare(entry, nr_pages) in do_swap_page is not yet called.
> > - T2 - zswap_writeback_entry starts and lets say writes page 2 to swap. it calls __read_swap_cache_async -> swapcache_prepare increments swap_map count, writes page to swap.
>
> Can the folio be dropped from the swapcache at this point (e.g. by
> reclaim)? If yes, it seems like swapcache_prepare() can succeed and
> try to read it from zswap.
>

I think you're onto something.

Can this happen: say T2 writes back a couple of tail pages, but not all of
them, then drops everything from the swap cache. Then T1 can definitely
proceed. It would then check again in zswap_load(), which returns false
(thanks to the zswap_present_test() check). All fine and good so far, but
then in swap_read_folio() it would try to fall back to reading the entire
large folio from the swapfile, which will contain bogus data in the pages
that have not been written back by T2.

I think the problem is that swap_read_folio() assumes it always succeeds,
and a precondition for a successful read is that the large folio must have
no mixed backing state across its subpages, which we fail to guarantee
before entering swap_read_folio(). This can lead to memory corruption.

Buuut, I think all we need to do is just check the backing state again
after T1's swapcache_prepare() call. At this point, we are guaranteed to
have a stable backing state.
If it fails here, then we can just exit and fall back to swapping in
individual pages.
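
Something like the below is roughly what I have in mind. This is a
completely untested sketch, not the actual patch: zswap_present_test() is
the helper from this series, but the wrapper name and the exact
swapcache_prepare()/swapcache_clear() signatures used here are just my
assumptions:

/*
 * Sketch only: re-check the backing state after swapcache_prepare() has
 * pinned the whole range, so that a racing writeback or exclusive load
 * can no longer move entries between zswap and the swapfile under us.
 */
static bool swapin_range_zswap_stable(struct swap_info_struct *si,
				      swp_entry_t entry, int nr_pages)
{
	/* Once this succeeds, the backing state of the range is stable. */
	if (swapcache_prepare(entry, nr_pages))
		return false;

	/* Redo the (previously racy) check now that the state is stable. */
	if (zswap_present_test(entry, nr_pages))
		return true;

	/*
	 * A writeback raced with us, so part of the range may now be on
	 * the swapfile: drop the reservation and let the caller fall back
	 * to swapping in individual order-0 pages.
	 */
	swapcache_clear(si, entry, nr_pages);
	return false;
}

Then do_swap_page() would call something like this (instead of the bare
swapcache_prepare()) after allocating the large folio, and take the
existing fallback path whenever it returns false.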