On Tue, Feb 27, 2024 at 10:03:46AM +0000, Yosry Ahmed wrote:
> In zswap_writeback_entry(), after we get a folio from
> __read_swap_cache_async(), we grab the tree lock again to check that the
> swap entry was not invalidated and recycled. If it was, we delete the
> folio we just added to the swap cache and exit.
>
> However, __read_swap_cache_async() returns the folio locked when it is
> newly allocated, which is always true for this path, and the folio is
> ref'd. Make sure to unlock and put the folio before returning.
>
> This was discovered by code inspection, probably because this path
> handles a race condition that should not happen often, and the bug does
> not crash the system; it only strands the folio indefinitely.
>
> Link: https://lkml.kernel.org/r/20240125085127.1327013-1-yosryahmed@xxxxxxxxxx
> Fixes: 04fc7816089c ("mm: fix zswap writeback race condition")
> Signed-off-by: Yosry Ahmed <yosryahmed@xxxxxxxxxx>
> Reviewed-by: Chengming Zhou <zhouchengming@xxxxxxxxxxxxx>
> Acked-by: Johannes Weiner <hannes@xxxxxxxxxxx>
> Reviewed-by: Nhat Pham <nphamcs@xxxxxxxxx>
> Cc: Domenico Cerasuolo <cerasuolodomenico@xxxxxxxxx>
> Cc: <stable@xxxxxxxxxxxxxxx>
> Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> (cherry picked from commit e3b63e966cac0bf78aaa1efede1827a252815a1d)

For obvious reasons, I can't take a patch only for 6.1 and not for newer
kernel releases (e.g. 6.6.y), as that would introduce a regression when
moving to a newer tree. Can you please provide a backport for that tree?
Then we can take this one.

thanks,

greg k-h
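
For context, the error path being fixed looks roughly like the sketch
below. This is a reconstruction from the commit message, not the exact
diff; the surrounding locking, variable names (tree, entry, swpentry,
ret, the fail label), and helper calls are taken from the mainline code
of that era and may differ in the 6.1.y and 6.6.y contexts. The point of
the fix is that the recycled-entry bailout must unlock and drop the
reference on the folio that __read_swap_cache_async() handed back:

	/*
	 * Sketch of the race check in zswap_writeback_entry(), after
	 * commit e3b63e966cac; context simplified. The swap entry may
	 * have been invalidated and recycled while we were allocating
	 * the folio, so recheck under the tree lock.
	 */
	spin_lock(&tree->lock);
	if (zswap_rb_search(&tree->root, swp_offset(swpentry)) != entry) {
		spin_unlock(&tree->lock);
		delete_from_swap_cache(folio);
		/*
		 * The folio came back from __read_swap_cache_async()
		 * locked and ref'd. Without these two calls it stays
		 * locked with an elevated refcount forever, i.e. it is
		 * stranded.
		 */
		folio_unlock(folio);
		folio_put(folio);
		ret = -ENOMEM;
		goto fail;
	}
	spin_unlock(&tree->lock);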