Re: [PATCH v4] mm/rmap: do not add fully unmapped large folio to deferred split list

On Fri, Apr 26, 2024 at 4:19 PM David Hildenbrand <david@xxxxxxxxxx> wrote:
>
> On 25.04.24 23:11, Zi Yan wrote:
> > From: Zi Yan <ziy@xxxxxxxxxx>
> >
> > In __folio_remove_rmap(), a large folio is added to the deferred split
> > list if any page in the folio loses its final mapping. But it is
> > possible that the folio is fully unmapped, in which case adding it to
> > the deferred split list is unnecessary.
> >
> > For PMD-mapped THPs, that was not really an issue, because removing the
> > last PMD mapping in the absence of PTE mappings would not have added the
> > folio to the deferred split queue.
> >
> > However, PTE-mapped THPs, which are now more prominent due to mTHP,
> > are always added to the deferred split queue. One side effect is that
> > the THP_DEFERRED_SPLIT_PAGE stat for a PTE-mapped folio can be
> > unintentionally increased, making it look like there are many
> > partially mapped folios -- although the whole folio is in fact being
> > fully unmapped, just stepwise.
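
(As an aside for anyone reproducing the stat noise: the counter shows
up in /proc/vmstat. A minimal userspace sketch to watch it -- assuming
a kernel that exposes thp_deferred_split_page there:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            char line[256];
            FILE *f = fopen("/proc/vmstat", "r");

            if (!f)
                    return 1;
            /* print only the deferred-split counter line */
            while (fgets(line, sizeof(line), f))
                    if (!strncmp(line, "thp_deferred_split_page", 23))
                            fputs(line, stdout);
            fclose(f);
            return 0;
    }

Sampling it before and after stepwise-unmapping a PTE-mapped THP shows
the spurious increments described above.)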
> >
> > Since commit b06dc281aa99 ("mm/rmap: introduce
> > folio_remove_rmap_[pte|ptes|pmd]()"), core-mm tries to batch-unmap
> > consecutive PTEs of PTE-mapped THPs where possible. When that
> > happens, a whole PTE-mapped folio is unmapped in one go and avoids
> > being added to the deferred split list, reducing the
> > THP_DEFERRED_SPLIT_PAGE noise. But there will still be noise when we
> > cannot batch-unmap a complete PTE-mapped folio in one go -- or where
> > this type of batching is not implemented yet, e.g., migration.
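
(For readers who have not followed the batching work, the shape of the
change from b06dc281aa99 is roughly the following -- a sketch of the
caller side, not actual kernel code:

    /* old: one rmap removal per PTE, nr_pages separate calls */
    for (i = 0; i < nr_pages; i++)
            folio_remove_rmap_pte(folio, page + i, vma);

    /* new: a caller that knows nr_pages consecutive PTEs map the
     * same folio removes them all at once */
    folio_remove_rmap_ptes(folio, page, nr_pages, vma);

With the batched call, nr can reach folio_nr_pages(folio) within a
single __folio_remove_rmap() invocation, which is what makes the
fully-unmapped check below possible.)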
> >
> > To avoid the unnecessary addition, folio->_nr_pages_mapped is checked
> > to tell whether the whole folio is unmapped. A folio that is already
> > on the deferred split list is skipped as well.
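
(The invariant being relied on, as a sketch: for an anon folio with no
remaining PMD mapping, folio->_nr_pages_mapped counts the PTE-mapped
pages, so a zero reading right after the per-PTE decrements means the
folio is fully unmapped:

    if (level == RMAP_LEVEL_PTE && !atomic_read(&folio->_nr_pages_mapped))
            ; /* fully unmapped: nothing left to defer-split */

which is exactly the condition the hunk below inverts.)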
> >
> > Note: commit 98046944a159 ("mm: huge_memory: add the missing
> > folio_test_pmd_mappable() for THP split statistics") tried to exclude
> > mTHP deferred split stats from THP_DEFERRED_SPLIT_PAGE, but it does
> > not fix the above issue. A fully unmapped PTE-mapped order-9 THP was
> > still added to the deferred split list and counted as
> > THP_DEFERRED_SPLIT_PAGE, since nr is 512 (non-zero), level is
> > RMAP_LEVEL_PTE, and inside deferred_split_folio() an order-9 folio
> > passes folio_test_pmd_mappable().
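
(Concretely, folio_test_pmd_mappable() only checks the folio's order,
not how the folio is mapped -- something like:

    /* order-9 anon folio, mapped exclusively via 512 PTEs */
    folio_test_pmd_mappable(folio); /* still true: order >= HPAGE_PMD_ORDER */

so filtering on it cannot distinguish a PTE-mapped THP from a
PMD-mapped one of the same size.)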
> >
> > Signed-off-by: Zi Yan <ziy@xxxxxxxxxx>
> > Reviewed-by: Yang Shi <shy828301@xxxxxxxxx>
> > ---
> >   mm/rmap.c | 8 +++++---
> >   1 file changed, 5 insertions(+), 3 deletions(-)
> >
> > diff --git a/mm/rmap.c b/mm/rmap.c
> > index a7913a454028..220ad8a83589 100644
> > --- a/mm/rmap.c
> > +++ b/mm/rmap.c
> > @@ -1553,9 +1553,11 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
> >                * page of the folio is unmapped and at least one page
> >                * is still mapped.
> >                */
> > -             if (folio_test_large(folio) && folio_test_anon(folio))
> > -                     if (level == RMAP_LEVEL_PTE || nr < nr_pmdmapped)
> > -                             deferred_split_folio(folio);
> > +             if (folio_test_large(folio) && folio_test_anon(folio) &&
> > +                 list_empty(&folio->_deferred_list) &&
> > +                 ((level == RMAP_LEVEL_PTE && atomic_read(mapped)) ||
> > +                  (level == RMAP_LEVEL_PMD && nr < nr_pmdmapped)))
> > +                     deferred_split_folio(folio);
> >       }
> >
> >       /*
> >
> > base-commit: 66313c66dd90e8711a8b63fc047ddfc69c53636a
>
> Reviewed-by: David Hildenbrand <david@xxxxxxxxxx>
>
> But maybe we can really improve the code:
>
>
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 2608c40dffade..e310b6c4221d7 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1495,6 +1495,7 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
>   {
>          atomic_t *mapped = &folio->_nr_pages_mapped;
>          int last, nr = 0, nr_pmdmapped = 0;
> +       bool partially_mapped = false;
>          enum node_stat_item idx;
>
>          __folio_rmap_sanity_checks(folio, page, nr_pages, level);
> @@ -1515,6 +1516,8 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
>                                          nr++;
>                          }
>                  } while (page++, --nr_pages > 0);
> +
> +               partially_mapped = nr && atomic_read(mapped);

nice!

>                  break;
>          case RMAP_LEVEL_PMD:
>                  atomic_dec(&folio->_large_mapcount);
> @@ -1532,6 +1535,7 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
>                                  nr = 0;
>                          }
>                  }
> +               partially_mapped = nr < nr_pmdmapped;
>                  break;
>          }
>
> @@ -1553,9 +1557,9 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
>                   * page of the folio is unmapped and at least one page
>                   * is still mapped.
>                   */
> -               if (folio_test_large(folio) && folio_test_anon(folio))
> -                       if (level == RMAP_LEVEL_PTE || nr < nr_pmdmapped)
> -                               deferred_split_folio(folio);
> +               if (folio_test_large(folio) && folio_test_anon(folio) &&
> +                   list_empty(&folio->_deferred_list) && partially_mapped)
> +                       deferred_split_folio(folio);
>          }
>
>          /*
>
> The compiler should be smart enough to optimize it all -- most likely ;)
>
> --
> Cheers,
>
> David / dhildenb
>