On 2024/1/25 16:51, Yosry Ahmed wrote:
> In zswap_writeback_entry(), after we get a folio from
> __read_swap_cache_async(), we grab the tree lock again to check that the
> swap entry was not invalidated and recycled. If it was, we delete the
> folio we just added to the swap cache and exit.
>
> However, __read_swap_cache_async() returns the folio locked when it is
> newly allocated, which is always true for this path, and the folio is
> ref'd. Make sure to unlock and put the folio before returning.
>
> This was discovered by code inspection, probably because this path
> handles a race condition that should not happen often; the bug would
> not crash the system, it would only strand the folio indefinitely.
>
> Fixes: 04fc7816089c ("mm: fix zswap writeback race condition")
> Cc: stable@xxxxxxxxxxxxxxx
> Signed-off-by: Yosry Ahmed <yosryahmed@xxxxxxxxxx>

LGTM, thanks!

Reviewed-by: Chengming Zhou <zhouchengming@xxxxxxxxxxxxx>

> ---
>  mm/zswap.c | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/mm/zswap.c b/mm/zswap.c
> index 8f4a7efc2bdae..00e90b9b5417d 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -1448,6 +1448,8 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
>  	if (zswap_rb_search(&tree->rbroot, swp_offset(entry->swpentry)) != entry) {
>  		spin_unlock(&tree->lock);
>  		delete_from_swap_cache(folio);
> +		folio_unlock(folio);
> +		folio_put(folio);
>  		return -ENOMEM;
>  	}
>  	spin_unlock(&tree->lock);
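
For readers following along: below is a paraphrased sketch of how the
race-handling path in zswap_writeback_entry() reads with this patch
applied. It is reconstructed from the hunk above, not the verbatim
upstream source, and the surrounding function body is elided. The point
is the cleanup pattern: a folio newly allocated by
__read_swap_cache_async() comes back locked and ref'd, so every exit
path after that call must unlock it and drop the reference.

	/*
	 * At this point __read_swap_cache_async() has returned a newly
	 * allocated folio: it is locked and we hold a reference to it.
	 */
	spin_lock(&tree->lock);
	if (zswap_rb_search(&tree->rbroot,
			    swp_offset(entry->swpentry)) != entry) {
		/*
		 * Raced with invalidation: the swap entry was recycled
		 * while we dropped the tree lock. Remove the folio we
		 * just added to the swap cache, then release the lock
		 * and reference taken at allocation so the folio is
		 * not stranded.
		 */
		spin_unlock(&tree->lock);
		delete_from_swap_cache(folio);
		folio_unlock(folio);	/* drop the folio lock */
		folio_put(folio);	/* drop the allocation ref */
		return -ENOMEM;
	}
	spin_unlock(&tree->lock);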