Re: [PATCH 2/2] mm: zswap: remove unnecessary tree cleanups in zswap_swapoff()

> > // lru list lock held
> > shrink_memcg_cb()
> >   swpentry = entry->swpentry
> >   // Don't isolate entry from lru list here, just use list_lru_putback()
> >   spin_unlock(lru list lock)
> >
> >   folio = __read_swap_cache_async(swpentry)
> >   if (!folio)
> >     return
> >
> >   if (!folio_was_allocated)
> >     folio_put(folio)
> >     return
> >
> >   // folio is locked, swapcache is secured against swapoff
> >   tree = get tree from swpentry
> >   spin_lock(&tree->lock)
>
> That will not work well with the zswap-to-xarray change. We want to
> remove the tree lock and only use the xarray lock.
> The lookup should just hold the xarray RCU read lock and return the
> entry with its refcount increased.

In this path, we also invalidate the zswap entry, which would require
holding the xarray lock anyway.
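For illustration, the tail of the flow above might look roughly like this
once the tree spinlock is replaced by the xarray's own lock (a sketch in
the same pseudocode style; the xarray field name is hypothetical, but
xa_lock()/xa_load()/__xa_erase() are the existing xarray primitives):

  // folio is locked, swapcache is secured against swapoff
  tree = get tree from swpentry
  xa_lock(&tree->xa)
  entry = xa_load(&tree->xa, offset)
  if (entry is still the one we started with)
    __xa_erase(&tree->xa, offset)  // invalidate under the same lock
  xa_unlock(&tree->xa)

The point being that lookup and invalidation happen under one xarray
lock acquisition, so no separate tree lock is needed in this path.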



