Re: [PATCH v2] mm: zswap: handle incorrect attempts to load large folios

On Sat, Jun 8, 2024 at 10:37 AM Yosry Ahmed <yosryahmed@xxxxxxxxxx> wrote:
>
> Zswap does not support storing or loading large folios. Until proper
> support is added, attempts to load large folios from zswap are a bug.
>
> For example, if a swapin fault observes that contiguous PTEs are
> pointing to contiguous swap entries and tries to swap them in as a large
> folio, swap_read_folio() will pass in a large folio to zswap_load(), but
> zswap_load() will only effectively load the first page in the folio. If
> the first page is not in zswap, the folio will be read from disk, even
> though other pages may be in zswap.
>
> In both cases, this will lead to silent data corruption. Proper support
> needs to be added before large folio swapins and zswap can work
> together.
>
> Looking at the callers of swap_read_folio(), it seems like the folios
> they pass in are allocated either in __read_swap_cache_async() or in
> do_swap_page() in the SWP_SYNCHRONOUS_IO path. Both allocate order-0
> folios, so everything is fine for now.
>
> However, there is ongoing work to add support for large folio swapins
> [1]. To make sure new development does not break zswap (or get broken by
> zswap), add minimal handling of incorrect loads of large folios to
> zswap.
>
> First, move the call to folio_mark_uptodate() inside zswap_load().
>
> If a large folio load is attempted, and any page in that folio is in
> zswap, return 'true' without calling folio_mark_uptodate(). This will
> prevent the folio from being read from disk, and will emit an IO error
> because the folio is not uptodate (e.g. do_swap_page() will return
> VM_FAULT_SIGBUS). It may not be a reliable recovery in all cases, but it
> is better than nothing.
>
> This was tested by hacking the allocation in __read_swap_cache_async()
> to use order 2 and __GFP_COMP.
>
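
For reference, a rough sketch of the kind of hack described above,
assuming the order-0 allocation in __read_swap_cache_async() goes
through alloc_pages_mpol() (the exact call site may differ between
kernel versions):

	/*
	 * Illustrative only: force __read_swap_cache_async() to allocate
	 * an order-2 compound folio instead of an order-0 one, so that
	 * swap_read_folio() -> zswap_load() sees a large folio.
	 */
	folio = (struct folio *)alloc_pages_mpol(gfp_mask | __GFP_COMP, 2,
						 mpol, ilx, numa_node_id());
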
> In the future, to handle this correctly, the swapin code should:
> (a) Fall back to order-0 swapins if zswap was ever used on the machine,
> because compressed pages remain in zswap after it is disabled.
> (b) Add proper support to swapin large folios from zswap (fully or
> partially).
>
> Probably start with (a), then follow up with (b).
>
> [1] https://lore.kernel.org/linux-mm/20240304081348.197341-6-21cnbao@xxxxxxxxx/
>
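
As an illustration of option (a), here is a minimal sketch of what an
order-0 fallback in the swapin path could look like. zswap_was_enabled()
is a hypothetical helper that would need to be added; it is not an
existing zswap API:

	/*
	 * Hypothetical sketch for (a): clamp the swapin order to 0 if
	 * zswap has ever been enabled, since compressed pages can remain
	 * in zswap even after it is disabled.
	 */
	static inline int swapin_folio_order(int requested_order)
	{
		if (zswap_was_enabled())	/* hypothetical helper */
			return 0;
		return requested_order;
	}
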
> Signed-off-by: Yosry Ahmed <yosryahmed@xxxxxxxxxx>
> ---
>
> v1: https://lore.kernel.org/lkml/20240606184818.1566920-1-yosryahmed@xxxxxxxxxx/
>
> v1 -> v2:
> - Instead of using VM_BUG_ON() use WARN_ON_ONCE() and add some recovery
>   handling (David Hildenbrand).
>
> ---
>  mm/page_io.c |  1 -
>  mm/zswap.c   | 22 +++++++++++++++++++++-
>  2 files changed, 21 insertions(+), 2 deletions(-)
>
> diff --git a/mm/page_io.c b/mm/page_io.c
> index f1a9cfab6e748..8f441dd8e109f 100644
> --- a/mm/page_io.c
> +++ b/mm/page_io.c
> @@ -517,7 +517,6 @@ void swap_read_folio(struct folio *folio, struct swap_iocb **plug)
>         delayacct_swapin_start();
>
>         if (zswap_load(folio)) {
> -               folio_mark_uptodate(folio);
>                 folio_unlock(folio);
>         } else if (data_race(sis->flags & SWP_FS_OPS)) {
>                 swap_read_folio_fs(folio, plug);
> diff --git a/mm/zswap.c b/mm/zswap.c
> index b9b35ef86d9be..ebb878d3e7865 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -1557,6 +1557,26 @@ bool zswap_load(struct folio *folio)
>
>         VM_WARN_ON_ONCE(!folio_test_locked(folio));
>
> +       /*
> +        * Large folios should not be swapped in while zswap is being used, as
> +        * they are not properly handled. Zswap does not properly load large
> +        * folios, and a large folio may only be partially in zswap.
> +        *
> +        * If any of the subpages are in zswap, reading from disk would result
> +        * in data corruption, so return true without marking the folio uptodate
> +        * so that an IO error is emitted (e.g. do_swap_page() will sigfault).
> +        *
> +        * Otherwise, return false and read the folio from disk.
> +        */
> +       if (folio_test_large(folio)) {
> +               if (xa_find(tree, &offset,
> +                           offset + folio_nr_pages(folio) - 1, XA_PRESENT)) {
> +                       WARN_ON_ONCE(1);
> +                       return true;
> +               }
> +               return false;

IMHO, this appears to be over-designed. Personally, I would opt to
use:

	if (folio_test_large(folio))
		return true;

Until we address large folio support in zswap, it is essential not to
let the two coexist. Expecting valid data in that case is not
advisable.

> +       }
> +
>         /*
>          * When reading into the swapcache, invalidate our entry. The
>          * swapcache can be the authoritative owner of the page and
> @@ -1590,7 +1610,7 @@ bool zswap_load(struct folio *folio)
>                 zswap_entry_free(entry);
>                 folio_mark_dirty(folio);
>         }
> -
> +       folio_mark_uptodate(folio);
>         return true;
>  }
>
> --
> 2.45.2.505.gda0bf45e8d-goog
>

Thanks
Barry
