On Thu, Jan 25, 2024 at 11:04 AM Chris Li <chrisl@xxxxxxxxxx> wrote:
>
> On Thu, Jan 25, 2024 at 12:02 AM Yosry Ahmed <yosryahmed@xxxxxxxxxx> wrote:
> >
> > > > // lru list lock held
> > > > shrink_memcg_cb()
> > > >         swpentry = entry->swpentry
> > > >         // Don't isolate entry from lru list here, just use list_lru_putback()
> > > >         spin_unlock(lru list lock)
> > > >
> > > >         folio = __read_swap_cache_async(swpentry)
> > > >         if (!folio)
> > > >                 return
> > > >
> > > >         if (!folio_was_allocated)
> > > >                 folio_put(folio)
> > > >                 return
> > > >
> > > >         // folio is locked, swapcache is secured against swapoff
> > > >         tree = get tree from swpentry
> > > >         spin_lock(&tree->lock)
> > >
> > > That will not work well with zswap to xarray change. We want to remove
> > > the tree lock and only use the xarray lock.
> > > The lookup should just hold xarray RCU read lock and return the entry
> > > with ref count increased.
> >
> > In this path, we also invalidate the zswap entry, which would require
> > holding the xarray lock anyway.
>
> It will drop the RCU read lock after finding the entry and re-acquire
> the xarray spin lock on invalidation. In between there is a brief
> moment without locks.

If my understanding is correct, at that point in the code writeback is
guaranteed to succeed unless the entry had already been removed from the
tree. So we can use xa_cmpxchg() as I described earlier to find and
remove the entry from the tree only if it exists. If it does, we
continue with the writeback; otherwise, we abort. No need for a separate
load and invalidation.

zswap_invalidate_entry() can return a boolean (whether the entry was
found and removed from the tree or not).
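
To make that concrete, here is a minimal sketch of what I mean, assuming
the tree has been converted to an xarray as discussed; the signature and
the zswap_entry_put() call are illustrative, not taken from the actual
patches:

#include <linux/xarray.h>

/*
 * Illustrative only: remove @entry from @tree at @offset iff it is
 * still present, and report whether we did. xa_cmpxchg() takes the
 * xarray lock internally, so no separate tree lock is needed.
 */
static bool zswap_invalidate_entry(struct xarray *tree, pgoff_t offset,
                                   struct zswap_entry *entry)
{
        /*
         * xa_cmpxchg() returns the previous slot contents; the slot is
         * replaced with NULL only if it still held @entry. Storing
         * NULL cannot allocate, so no error handling is needed here.
         */
        if (xa_cmpxchg(tree, offset, entry, NULL, GFP_KERNEL) != entry)
                return false;   /* already removed, abort writeback */

        zswap_entry_put(entry); /* drop the tree's reference */
        return true;
}

Writeback would call this right after securing the folio in the
swapcache: a true return means we own the writeback, a false return
means the entry went away underneath us and we bail out.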