Re: [PATCH v9 1/2] mm/khugepaged: recover from poisoned anonymous memory

On Thu, Jan 19, 2023 at 7:03 AM <kirill.shutemov@xxxxxxxxxxxxxxx> wrote:
>
> On Mon, Dec 05, 2022 at 03:40:58PM -0800, Jiaqi Yan wrote:
> > Make __collapse_huge_page_copy return whether copying anonymous pages
> > succeeded, and make collapse_huge_page handle the return status.
> >
> > Break existing PTE scan loop into two for-loops. The first loop copies
> > source pages into target huge page, and can fail gracefully when running
> > into memory errors in source pages. If copying all pages succeeds, the
> > second loop releases and clears up these normal pages. Otherwise, the
> > second loop rolls back the page table and page states by:
> > - re-establishing the original PTEs-to-PMD connection.
> > - releasing source pages back to their LRU list.
> >
> > Tested manually:
> > 0. Enable khugepaged on system under test.
> > 1. Start a two-thread application. Each thread allocates a chunk of
> >    non-huge anonymous memory buffer.
> > 2. Pick 4 random buffer locations (2 in each thread) and inject
> >    uncorrectable memory errors at corresponding physical addresses.
> > 3. Signal both threads to make their memory buffer collapsible, i.e.
> >    calling madvise(MADV_HUGEPAGE).
> > 4. Wait and check kernel log: khugepaged is able to recover from poisoned
> >    pages and skips collapsing them.
> > 5. Signal both threads to inspect their buffer contents and make sure no
> >    data corruption.
> >
> > Signed-off-by: Jiaqi Yan <jiaqiyan@xxxxxxxxxx>
> > ---
> >  include/trace/events/huge_memory.h |   3 +-
> >  mm/khugepaged.c                    | 179 ++++++++++++++++++++++-------
> >  2 files changed, 139 insertions(+), 43 deletions(-)
> >
> > diff --git a/include/trace/events/huge_memory.h b/include/trace/events/huge_memory.h
> > index 35d759d3b0104..5743ae970af31 100644
> > --- a/include/trace/events/huge_memory.h
> > +++ b/include/trace/events/huge_memory.h
> > @@ -36,7 +36,8 @@
> >       EM( SCAN_ALLOC_HUGE_PAGE_FAIL,  "alloc_huge_page_failed")       \
> >       EM( SCAN_CGROUP_CHARGE_FAIL,    "ccgroup_charge_failed")        \
> >       EM( SCAN_TRUNCATED,             "truncated")                    \
> > -     EMe(SCAN_PAGE_HAS_PRIVATE,      "page_has_private")             \
> > +     EM( SCAN_PAGE_HAS_PRIVATE,      "page_has_private")             \
> > +     EMe(SCAN_COPY_MC,               "copy_poisoned_page")           \
> >
> >  #undef EM
> >  #undef EMe
> > diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> > index 5a7d2d5093f9c..0f1b9e05e17ec 100644
> > --- a/mm/khugepaged.c
> > +++ b/mm/khugepaged.c
> > @@ -19,6 +19,7 @@
> >  #include <linux/page_table_check.h>
> >  #include <linux/swapops.h>
> >  #include <linux/shmem_fs.h>
> > +#include <linux/kmsan.h>
> >
> >  #include <asm/tlb.h>
> >  #include <asm/pgalloc.h>
> > @@ -55,6 +56,7 @@ enum scan_result {
> >       SCAN_CGROUP_CHARGE_FAIL,
> >       SCAN_TRUNCATED,
> >       SCAN_PAGE_HAS_PRIVATE,
> > +     SCAN_COPY_MC,
> >  };
> >
> >  #define CREATE_TRACE_POINTS
> > @@ -530,6 +532,27 @@ static bool is_refcount_suitable(struct page *page)
> >       return page_count(page) == expected_refcount;
> >  }
> >
> > +/*
> > + * Copies memory with #MC in source page (@from) handled. Returns number
> > + * of bytes not copied if there was an exception; otherwise 0 for success.
> > + * Note handling #MC requires arch opt-in.
> > + */
> > +static int copy_mc_page(struct page *to, struct page *from)
> > +{
> > +     char *vfrom, *vto;
> > +     unsigned long ret;
> > +
> > +     vfrom = kmap_local_page(from);
> > +     vto = kmap_local_page(to);
> > +     ret = copy_mc_to_kernel(vto, vfrom, PAGE_SIZE);
> > +     if (ret == 0)
> > +             kmsan_copy_page_meta(to, from);
> > +     kunmap_local(vto);
> > +     kunmap_local(vfrom);
> > +
> > +     return ret;
> > +}
>
>
> It is very similar to copy_mc_user_highpage(), but uses
> kmsan_copy_page_meta() instead of kmsan_unpoison_memory().
>
> Could you explain the difference? I don't quite get it.

copy_mc_page is really the MC version of copy_highpage, which uses
kmsan_copy_page_meta rather than kmsan_unpoison_memory.

My understanding is that kmsan_copy_page_meta covers what
kmsan_unpoison_memory does. When the source page has no metadata
(kmsan_shadow or kmsan_origin), both kmsan_copy_page_meta and
kmsan_unpoison_memory just call kmsan_internal_unpoison_memory to mark
the memory range as initialized; when the source page does have
metadata, kmsan_copy_page_meta copies that metadata to the destination
page. So I think kmsan_copy_page_meta is the right thing to do here.
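For comparison, if I'm reading include/linux/highmem.h in this tree
correctly, copy_mc_user_highpage() is roughly the following (shown
paraphrased, not copied verbatim), so the kmsan call really is the only
substantive difference:

static inline int copy_mc_user_highpage(struct page *to, struct page *from,
					unsigned long vaddr, struct vm_area_struct *vma)
{
	unsigned long ret;
	char *vfrom, *vto;

	vfrom = kmap_local_page(from);
	vto = kmap_local_page(to);
	ret = copy_mc_to_kernel(vto, vfrom, PAGE_SIZE);
	if (!ret)
		/* no #MC hit: mark the whole destination page initialized */
		kmsan_unpoison_memory(page_address(to), PAGE_SIZE);
	kunmap_local(vto);
	kunmap_local(vfrom);

	return ret;
}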

>
> > +
> >  static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
> >                                       unsigned long address,
> >                                       pte_t *pte,
> > @@ -670,56 +693,124 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
> >       return result;
> >  }
> >
> > -static void __collapse_huge_page_copy(pte_t *pte, struct page *page,
> > -                                   struct vm_area_struct *vma,
> > -                                   unsigned long address,
> > -                                   spinlock_t *ptl,
> > -                                   struct list_head *compound_pagelist)
> > +/*
> > + * __collapse_huge_page_copy - attempts to copy memory contents from normal
> > + * pages to a hugepage. Cleans up the normal pages if copying succeeds;
> > + * otherwise restores the original page table and releases isolated normal pages.
> > + * Returns SCAN_SUCCEED if copying succeeds, otherwise returns SCAN_COPY_MC.
> > + *
> > + * @pte: starting of the PTEs to copy from
> > + * @page: the new hugepage to copy contents to
> > + * @pmd: pointer to the new hugepage's PMD
> > + * @rollback: the original normal pages' PMD
> > + * @vma: the original normal pages' virtual memory area
> > + * @address: starting address to copy
> > + * @pte_ptl: lock on normal pages' PTEs
> > + * @compound_pagelist: list that stores compound pages
> > + */
> > +static int __collapse_huge_page_copy(pte_t *pte,
> > +                                  struct page *page,
> > +                                  pmd_t *pmd,
> > +                                  pmd_t rollback,
>
> I think 'orig_pmd' is a better name.

It will be renamed to orig_pmd in the next version (v10).
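With only the rename applied, the v10 signature would read something
like:

static int __collapse_huge_page_copy(pte_t *pte,
				     struct page *page,
				     pmd_t *pmd,
				     pmd_t orig_pmd,
				     struct vm_area_struct *vma,
				     unsigned long address,
				     spinlock_t *pte_ptl,
				     struct list_head *compound_pagelist);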

>
> > +                                  struct vm_area_struct *vma,
> > +                                  unsigned long address,
> > +                                  spinlock_t *pte_ptl,
> > +                                  struct list_head *compound_pagelist)
> >  {
> >       struct page *src_page, *tmp;
> >       pte_t *_pte;
> > -     for (_pte = pte; _pte < pte + HPAGE_PMD_NR;
> > -                             _pte++, page++, address += PAGE_SIZE) {
> > -             pte_t pteval = *_pte;
> > +     pte_t pteval;
> > +     unsigned long _address;
> > +     spinlock_t *pmd_ptl;
> > +     int result = SCAN_SUCCEED;
> >
> > -             if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) {
> > -                     clear_user_highpage(page, address);
> > -                     add_mm_counter(vma->vm_mm, MM_ANONPAGES, 1);
> > -                     if (is_zero_pfn(pte_pfn(pteval))) {
> > +     /*
> > +      * Copying pages' contents is subject to memory poison at any iteration.
> > +      */
> > +     for (_pte = pte, _address = address; _pte < pte + HPAGE_PMD_NR;
> > +          _pte++, page++, _address += PAGE_SIZE) {
> > +             pteval = *_pte;
> > +
> > +             if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval)))
> > +                     clear_user_highpage(page, _address);
> > +             else {
> > +                     src_page = pte_page(pteval);
> > +                     if (copy_mc_page(page, src_page) > 0) {
> > +                             result = SCAN_COPY_MC;
> > +                             break;
> > +                     }
> > +             }
> > +     }
> > +
> > +     if (likely(result == SCAN_SUCCEED)) {
> > +             for (_pte = pte, _address = address; _pte < pte + HPAGE_PMD_NR;
> > +                  _pte++, _address += PAGE_SIZE) {
> > +                     pteval = *_pte;
> > +                     if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) {
> > +                             add_mm_counter(vma->vm_mm, MM_ANONPAGES, 1);
> > +                             if (is_zero_pfn(pte_pfn(pteval))) {
> > +                                     /*
> > +                                      * pte_ptl mostly unnecessary.
> > +                                      */
> > +                                     spin_lock(pte_ptl);
> > +                                     pte_clear(vma->vm_mm, _address, _pte);
> > +                                     spin_unlock(pte_ptl);
> > +                             }
> > +                     } else {
> > +                             src_page = pte_page(pteval);
> > +                             if (!PageCompound(src_page))
> > +                                     release_pte_page(src_page);
> >                               /*
> > -                              * ptl mostly unnecessary.
> > +                              * pte_ptl mostly unnecessary, but preempt has
> > +                              * to be disabled to update the per-cpu stats
> > +                              * inside page_remove_rmap().
> >                                */
> > -                             spin_lock(ptl);
> > -                             ptep_clear(vma->vm_mm, address, _pte);
> > -                             spin_unlock(ptl);
> > +                             spin_lock(pte_ptl);
> > +                             ptep_clear(vma->vm_mm, _address, _pte);
> > +                             page_remove_rmap(src_page, vma, false);
> > +                             spin_unlock(pte_ptl);
> > +                             free_page_and_swap_cache(src_page);
> > +                     }
> > +             }
> > +             list_for_each_entry_safe(src_page, tmp, compound_pagelist, lru) {
> > +                     list_del(&src_page->lru);
> > +                     mod_node_page_state(page_pgdat(src_page),
> > +                                     NR_ISOLATED_ANON + page_is_file_lru(src_page),
> > +                                     -compound_nr(src_page));
> > +                     unlock_page(src_page);
> > +                     free_swap_cache(src_page);
> > +                     putback_lru_page(src_page);
> > +             }
> > +     } else {
> > +             /*
> > +              * Re-establish the regular PMD that points to the regular
> > +              * page table. Restoring PMD needs to be done prior to
> > +              * releasing pages. Since pages are still isolated and
> > +              * locked here, acquiring anon_vma_lock_write is unnecessary.
> > +              */
> > +             pmd_ptl = pmd_lock(vma->vm_mm, pmd);
> > +             pmd_populate(vma->vm_mm, pmd, pmd_pgtable(rollback));
> > +             spin_unlock(pmd_ptl);
> > +             /*
> > +              * Release both raw and compound pages isolated
> > +              * in __collapse_huge_page_isolate.
> > +              */
> > +             for (_pte = pte, _address = address; _pte < pte + HPAGE_PMD_NR;
> > +                  _pte++, _address += PAGE_SIZE) {
> > +                     pteval = *_pte;
> > +                     if (!pte_none(pteval) && !is_zero_pfn(pte_pfn(pteval))) {
> > +                             src_page = pte_page(pteval);
> > +                             if (!PageCompound(src_page))
> > +                                     release_pte_page(src_page);
>
> Indentation levels get out of control. Maybe some code restructuring is
> required?

In v10 I will change it to something like this to remove one level of
indentation (see the full rollback loop sketched after the snippet):

    if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval)))
        continue;
    src_page = pte_page(pteval);
    if (!PageCompound(src_page))
        release_pte_page(src_page);
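
With that restructuring, the rollback loop as a whole would read
roughly as follows (a sketch of the intended v10 shape, not the final
code):

	for (_pte = pte, _address = address; _pte < pte + HPAGE_PMD_NR;
	     _pte++, _address += PAGE_SIZE) {
		pteval = *_pte;
		/* nothing was isolated for none/zero entries, skip them */
		if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval)))
			continue;
		src_page = pte_page(pteval);
		if (!PageCompound(src_page))
			release_pte_page(src_page);
	}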

>
> >                       }
> > -             } else {
> > -                     src_page = pte_page(pteval);
> > -                     copy_user_highpage(page, src_page, address, vma);
> > -                     if (!PageCompound(src_page))
> > -                             release_pte_page(src_page);
> > -                     /*
> > -                      * ptl mostly unnecessary, but preempt has to
> > -                      * be disabled to update the per-cpu stats
> > -                      * inside page_remove_rmap().
> > -                      */
> > -                     spin_lock(ptl);
> > -                     ptep_clear(vma->vm_mm, address, _pte);
> > -                     page_remove_rmap(src_page, vma, false);
> > -                     spin_unlock(ptl);
> > -                     free_page_and_swap_cache(src_page);
> > +             }
> > +             list_for_each_entry_safe(src_page, tmp, compound_pagelist, lru) {
> > +                     list_del(&src_page->lru);
> > +                     release_pte_page(src_page);
> >               }
> >       }
> >
> > -     list_for_each_entry_safe(src_page, tmp, compound_pagelist, lru) {
> > -             list_del(&src_page->lru);
> > -             mod_node_page_state(page_pgdat(src_page),
> > -                                 NR_ISOLATED_ANON + page_is_file_lru(src_page),
> > -                                 -compound_nr(src_page));
> > -             unlock_page(src_page);
> > -             free_swap_cache(src_page);
> > -             putback_lru_page(src_page);
> > -     }
> > +     return result;
> >  }
> >
> >  static void khugepaged_alloc_sleep(void)
> > @@ -1079,9 +1170,13 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> >        */
> >       anon_vma_unlock_write(vma->anon_vma);
> >
> > -     __collapse_huge_page_copy(pte, hpage, vma, address, pte_ptl,
> > -                               &compound_pagelist);
> > +     result = __collapse_huge_page_copy(pte, hpage, pmd, _pmd,
> > +                                        vma, address, pte_ptl,
> > +                                        &compound_pagelist);
> >       pte_unmap(pte);
> > +     if (unlikely(result != SCAN_SUCCEED))
> > +             goto out_up_write;
> > +
> >       /*
> >        * spin_lock() below is not the equivalent of smp_wmb(), but
> >        * the smp_wmb() inside __SetPageUptodate() can be reused to
> > --
> > 2.39.0.rc0.267.gcb52ba06e7-goog
> >
>
> --
>   Kiryl Shutsemau / Kirill A. Shutemov



