Re: [PATCH v2] mm: zswap: handle incorrect attempts to load large folios

On Tue, Jun 11, 2024 at 9:12 AM Yosry Ahmed <yosryahmed@xxxxxxxxxx> wrote:
>
> On Mon, Jun 10, 2024 at 2:00 PM Barry Song <21cnbao@xxxxxxxxx> wrote:
> >
> > On Tue, Jun 11, 2024 at 4:12 AM Yosry Ahmed <yosryahmed@xxxxxxxxxx> wrote:
> > >
> > > On Mon, Jun 10, 2024 at 1:06 PM Barry Song <21cnbao@xxxxxxxxx> wrote:
> > > >
> > > > On Tue, Jun 11, 2024 at 1:42 AM Yosry Ahmed <yosryahmed@xxxxxxxxxx> wrote:
> > > > >
> > > > > On Fri, Jun 7, 2024 at 9:13 PM Barry Song <21cnbao@xxxxxxxxx> wrote:
> > > > > >
> > > > > > On Sat, Jun 8, 2024 at 10:37 AM Yosry Ahmed <yosryahmed@xxxxxxxxxx> wrote:
> > > > > > >
> > > > > > > Zswap does not support storing or loading large folios. Until proper
> > > > > > > support is added, attempts to load large folios from zswap are a bug.
> > > > > > >
> > > > > > > For example, if a swapin fault observes that contiguous PTEs are
> > > > > > > pointing to contiguous swap entries and tries to swap them in as a large
> > > > > > > folio, swap_read_folio() will pass in a large folio to zswap_load(), but
> > > > > > > zswap_load() will only effectively load the first page in the folio. If
> > > > > > > the first page is not in zswap, the folio will be read from disk, even
> > > > > > > though other pages may be in zswap.
> > > > > > >
> > > > > > > In both cases, this will lead to silent data corruption. Proper support
> > > > > > > needs to be added before large folio swapins and zswap can work
> > > > > > > together.
> > > > > > >
> > > > > > > Looking at the callers of swap_read_folio(), it seems like the folios
> > > > > > > passed to it are allocated either by __read_swap_cache_async() or by
> > > > > > > do_swap_page() in the SWP_SYNCHRONOUS_IO path, both of which allocate
> > > > > > > order-0 folios, so everything is fine for now.
> > > > > > >
> > > > > > > However, there is ongoing work to add support for large folio swapins
> > > > > > > [1]. To make sure new development does not break zswap (or get broken by
> > > > > > > zswap), add minimal handling of incorrect loads of large folios to
> > > > > > > zswap.
> > > > > > >
> > > > > > > First, move the call to folio_mark_uptodate() inside zswap_load().
> > > > > > >
> > > > > > > If a large folio load is attempted, and any page in that folio is in
> > > > > > > zswap, return 'true' without calling folio_mark_uptodate(). This will
> > > > > > > prevent the folio from being read from disk, and will emit an IO error
> > > > > > > because the folio is not uptodate (e.g. do_swap_page() will return
> > > > > > > VM_FAULT_SIGBUS). It may not be a reliable recovery in all cases, but
> > > > > > > it is better than nothing.
> > > > > > >
> > > > > > > This was tested by hacking the allocation in __read_swap_cache_async()
> > > > > > > to use order 2 and __GFP_COMP.
> > > > > > >
> > > > > > > In the future, to handle this correctly, the swapin code should:
> > > > > > > (a) Fallback to order-0 swapins if zswap was ever used on the machine,
> > > > > > > because compressed pages remain in zswap after it is disabled.
> > > > > > > (b) Add proper support to swapin large folios from zswap (fully or
> > > > > > > partially).
> > > > > > >
> > > > > > > Probably start with (a) then followup with (b).
> > > > > > >
> > > > > > > [1] https://lore.kernel.org/linux-mm/20240304081348.197341-6-21cnbao@xxxxxxxxx/
> > > > > > >
> > > > > > > Signed-off-by: Yosry Ahmed <yosryahmed@xxxxxxxxxx>
> > > > > > > ---
> > > > > > >
> > > > > > > v1: https://lore.kernel.org/lkml/20240606184818.1566920-1-yosryahmed@xxxxxxxxxx/
> > > > > > >
> > > > > > > v1 -> v2:
> > > > > > > - Instead of using VM_BUG_ON() use WARN_ON_ONCE() and add some recovery
> > > > > > >   handling (David Hildenbrand).
> > > > > > >
> > > > > > > ---
> > > > > > >  mm/page_io.c |  1 -
> > > > > > >  mm/zswap.c   | 22 +++++++++++++++++++++-
> > > > > > >  2 files changed, 21 insertions(+), 2 deletions(-)
> > > > > > >
> > > > > > > diff --git a/mm/page_io.c b/mm/page_io.c
> > > > > > > index f1a9cfab6e748..8f441dd8e109f 100644
> > > > > > > --- a/mm/page_io.c
> > > > > > > +++ b/mm/page_io.c
> > > > > > > @@ -517,7 +517,6 @@ void swap_read_folio(struct folio *folio, struct swap_iocb **plug)
> > > > > > >         delayacct_swapin_start();
> > > > > > >
> > > > > > >         if (zswap_load(folio)) {
> > > > > > > -               folio_mark_uptodate(folio);
> > > > > > >                 folio_unlock(folio);
> > > > > > >         } else if (data_race(sis->flags & SWP_FS_OPS)) {
> > > > > > >                 swap_read_folio_fs(folio, plug);
> > > > > > > diff --git a/mm/zswap.c b/mm/zswap.c
> > > > > > > index b9b35ef86d9be..ebb878d3e7865 100644
> > > > > > > --- a/mm/zswap.c
> > > > > > > +++ b/mm/zswap.c
> > > > > > > @@ -1557,6 +1557,26 @@ bool zswap_load(struct folio *folio)
> > > > > > >
> > > > > > >         VM_WARN_ON_ONCE(!folio_test_locked(folio));
> > > > > > >
> > > > > > > +       /*
> > > > > > > +        * Large folios should not be swapped in while zswap is being used, as
> > > > > > > +        * they are not properly handled. Zswap does not properly load large
> > > > > > > +        * folios, and a large folio may only be partially in zswap.
> > > > > > > +        *
> > > > > > > +        * If any of the subpages are in zswap, reading from disk would result
> > > > > > > +        * in data corruption, so return true without marking the folio uptodate
> > > > > > > +        * so that an IO error is emitted (e.g. do_swap_page() will sigbus).
> > > > > > > +        *
> > > > > > > +        * Otherwise, return false and read the folio from disk.
> > > > > > > +        */
> > > > > > > +       if (folio_test_large(folio)) {
> > > > > > > +               if (xa_find(tree, &offset,
> > > > > > > +                           offset + folio_nr_pages(folio) - 1, XA_PRESENT)) {
> > > > > > > +                       WARN_ON_ONCE(1);
> > > > > > > +                       return true;
> > > > > > > +               }
> > > > > > > +               return false;
> > > > > >
> > > > > > IMHO, this appears to be over-designed. Personally, I would opt to
> > > > > > use
> > > > > >
> > > > > > if (folio_test_large(folio))
> > > > > > 	return true;
> > > > >
> > > > > I am sure you mean "return false" here. Always returning true means we
> > > > > will never read a large folio from either zswap or disk, whether it's
> > > > > in zswap or not. Basically guaranteeing data corruption for large
> > > > > folio swapin, even if zswap is disabled :)
> > > > >
> > > > > >
> > > > > > Before we address large folio support in zswap, it’s essential
> > > > > > not to let them coexist. Expecting valid data while they do is
> > > > > > not advisable.
> > > > >
> > > > > The goal here is to enable development for large folio swapin without
> > > > > breaking zswap or being blocked on adding support in zswap. If we
> > > > > always return false for large folios, as you suggest, then even if the
> > > > > folio is in zswap (or parts of it), we will go read it from disk. This
> > > > > will result in silent data corruption.
> > > > >
> > > > > As you mentioned before, you spent a week debugging problems with your
> > > > > large folio swapin series because of a zswap problem, and even then,
> > > > > the zswap_is_enabled() check you had is not enough to prevent
> > > > > problems as I mentioned before (if zswap was enabled before). So we
> > > > > need stronger checks to make sure we don't break things when we
> > > > > support large folio swapin.
> > > > >
> > > > > Since we can't just check if zswap is enabled or not, we need to
> > > > > rather check if the folio (or any part of it) is in zswap or not. We
> > > > > could just WARN in that case, but delivering the error to userspace is
> > > > > only a couple of extra lines of code (not setting uptodate), and will
> > > > > make the problem much easier to notice.
> > > > >
> > > > > I am not sure I understand what you mean. The alternative is to
> > > > > introduce a config option (perhaps internal) for large folio swapin,
> > > > > and make this depend on !CONFIG_ZSWAP, or make zswap refuse to get
> > > > > enabled if large folio swapin is enabled (through config or boot
> > > > > option). This is until proper handling is added, of course.
> > > >
> > > > Hi Yosry,
> > > > My point is that anybody who attempts to do large folio swap-in should
> > > > either
> > > > 1. always use small folios if zswap is enabled now or has ever been
> > > > enabled before, or
> > > > 2. address the large folio swapin issues in zswap.
> > > >
> > > > There is no 3rd way like the one you are providing.
> > > >
> > > > It is over-designed to return true or false based on whether the data
> > > > is in zswap, as there is always a chance the data could be in zswap.
> > > > So before approach 2 is done, we should always WARN_ON large folios
> > > > and report data corruption.
> > >
> > > We can't always WARN_ON for large folios, as this will fire even if
> > > zswap was never enabled. The alternative is tracking whether zswap was
> > > ever enabled, and checking that instead of checking if any part of the
> > > folio is in zswap.
> > >
> > > Basically replacing xa_find(..) with zswap_was_enabled(..) or something.
> >
> > My point is that mm core should always fall back:
> >
> > if (zswap_was_or_is_enabled())
> >      goto fallback;
> >
> > until zswap fixes the issue. This is the only way to enable large folio
> > swap-in development before we fix zswap.
>
> I agree with this, I just want an extra fallback in zswap itself in
> case something was missed during large folio swapin development (which
> can evidently happen).

Yes. Then I feel we only need to WARN_ON the case where mm-core fails to
fall back.

I mean, only WARN_ON is_zswap_ever_enabled() && folio_test_large(folio);
there is no need to do more. Before zswap gains large folio support,
mm-core will need is_zswap_ever_enabled() to do the fallback.

diff --git a/include/linux/zswap.h b/include/linux/zswap.h
index 2a85b941db97..035e51ed89c4 100644
--- a/include/linux/zswap.h
+++ b/include/linux/zswap.h
@@ -36,6 +36,7 @@ void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg);
 void zswap_lruvec_state_init(struct lruvec *lruvec);
 void zswap_folio_swapin(struct folio *folio);
 bool is_zswap_enabled(void);
+bool is_zswap_ever_enabled(void);
 #else

 struct zswap_lruvec_state {};
@@ -65,6 +66,10 @@ static inline bool is_zswap_enabled(void)
        return false;
 }

+static inline bool is_zswap_ever_enabled(void)
+{
+       return false;
+}
 #endif

 #endif /* _LINUX_ZSWAP_H */
diff --git a/mm/zswap.c b/mm/zswap.c
index b9b35ef86d9b..bf2da5d37e47 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -86,6 +86,9 @@ static int zswap_setup(void);
 static bool zswap_enabled = IS_ENABLED(CONFIG_ZSWAP_DEFAULT_ON);
 static int zswap_enabled_param_set(const char *,
                                   const struct kernel_param *);
+
+static bool zswap_ever_enabled;
+
 static const struct kernel_param_ops zswap_enabled_param_ops = {
        .set =          zswap_enabled_param_set,
        .get =          param_get_bool,
@@ -136,6 +139,11 @@ bool is_zswap_enabled(void)
        return zswap_enabled;
 }

+bool is_zswap_ever_enabled(void)
+{
+       return zswap_enabled || zswap_ever_enabled;
+}
+
 /*********************************
 * data structures
 **********************************/
@@ -1734,6 +1742,7 @@ static int zswap_setup(void)
                pr_info("loaded using pool %s/%s\n", pool->tfm_name,
                        zpool_get_type(pool->zpools[0]));
                list_add(&pool->list, &zswap_pools);
+               zswap_ever_enabled = true;
                zswap_has_pool = true;
        } else {
                pr_err("pool creation failed\n");
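
For reference, the mm-core fallback described above might look roughly like
the sketch below (untested; alloc_swap_folio() is a hypothetical helper in
the do_swap_page() SWP_SYNCHRONOUS_IO path, along the lines of the swap-in
series at [1]):

static struct folio *alloc_swap_folio(struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;

	/*
	 * Zswap cannot load large folios yet, and compressed pages may
	 * remain in zswap even after it has been disabled, so fall back
	 * to order-0 if zswap is enabled now or was ever enabled.
	 */
	if (is_zswap_ever_enabled())
		goto fallback;

	/* ... otherwise try a larger order based on the swap PTE batch ... */

fallback:
	return vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, vma,
			       vmf->address, false);
}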

Thanks
Barry




