On 6/8/24 05:36, Yosry Ahmed wrote:
> diff --git a/mm/zswap.c b/mm/zswap.c
> index b9b35ef86d9be..ebb878d3e7865 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -1557,6 +1557,26 @@ bool zswap_load(struct folio *folio)
>
>  	VM_WARN_ON_ONCE(!folio_test_locked(folio));
>
> +	/*
> +	 * Large folios should not be swapped in while zswap is being used, as
> +	 * they are not properly handled. Zswap does not properly load large
> +	 * folios, and a large folio may only be partially in zswap.
> +	 *
> +	 * If any of the subpages are in zswap, reading from disk would result
> +	 * in data corruption, so return true without marking the folio uptodate
> +	 * so that an IO error is emitted (e.g. do_swap_page() will sigfault).
> +	 *
> +	 * Otherwise, return false and read the folio from disk.
> +	 */
> +	if (folio_test_large(folio)) {
> +		if (xa_find(tree, &offset,
> +			    offset + folio_nr_pages(folio) - 1, XA_PRESENT)) {
> +			WARN_ON_ONCE(1);
> +			return true;
> +		}

How does that work? Should it be xa_find_after() so that it doesn't always find the current entry? And does it still mean those subsequent entries map to the same folio?

--Mika
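P.S. To make the xa_find()/xa_find_after() semantics I'm asking about concrete, here is a minimal sketch. This is a hypothetical standalone test module, not code from the patch; the offset, folio size, and stored value are all made up:

#include <linux/module.h>
#include <linux/xarray.h>

static DEFINE_XARRAY(test_xa);

static int __init xa_find_demo_init(void)
{
	unsigned long offset = 100;	/* made-up swap offset of subpage 0 */
	unsigned long nr = 4;		/* pretend folio_nr_pages() returned 4 */
	unsigned long index;
	void *entry;

	/* Store an entry only at the folio's first subpage index. */
	xa_store(&test_xa, offset, xa_mk_value(1), GFP_KERNEL);

	/* xa_find() searches from *index inclusive, so it finds offset itself. */
	index = offset;
	entry = xa_find(&test_xa, &index, offset + nr - 1, XA_PRESENT);
	pr_info("xa_find: entry=%p at index=%lu\n", entry, index);

	/* xa_find_after() searches strictly after *index, so it finds nothing. */
	index = offset;
	entry = xa_find_after(&test_xa, &index, offset + nr - 1, XA_PRESENT);
	pr_info("xa_find_after: entry=%p\n", entry);

	xa_destroy(&test_xa);
	return 0;
}

static void __exit xa_find_demo_exit(void)
{
}

module_init(xa_find_demo_init);
module_exit(xa_find_demo_exit);
MODULE_LICENSE("GPL");

If I read it right, xa_find() here returns the entry stored at offset itself, while xa_find_after() with the same starting index would skip it, which is what made me wonder whether finding the folio's own entry is intended.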