Re: [RFC PATCH 06/14] mm/khugepaged: add hugepage_vma_revalidate_pmd_count()

On Tue, Mar 8, 2022 at 1:34 PM Zach O'Keefe <zokeefe@xxxxxxxxxx> wrote:
>
> madvise collapse context operates on pmds in batch. We will want to
> be able to revalidate a region that spans multiple pmds in the same
> vma.
>
> Add hugepage_vma_revalidate_pmd_count() which extends
> hugepage_vma_revalidate() with number of pmds to revalidate.
> hugepage_vma_revalidate() now calls through this.
>
> Signed-off-by: Zach O'Keefe <zokeefe@xxxxxxxxxx>
> ---
>  mm/khugepaged.c | 26 ++++++++++++++++++--------
>  1 file changed, 18 insertions(+), 8 deletions(-)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 56f2ef7146c7..1d20be47bcea 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -964,18 +964,17 @@ khugepaged_alloc_page(struct page **hpage, gfp_t gfp, int node)
>  #endif
>
>  /*
> - * If mmap_lock temporarily dropped, revalidate vma
> - * before taking mmap_lock.
> - * Return 0 if succeeds, otherwise return none-zero
> - * value (scan code).
> + * Revalidate a vma's eligibility to collapse nr hugepages.
>   */
> -
> -static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
> -               struct vm_area_struct **vmap)
> +static int hugepage_vma_revalidate_pmd_count(struct mm_struct *mm,
> +                                            unsigned long address, int nr,
> +                                            struct vm_area_struct **vmap)

Same comment as on the earlier patch: it's better to introduce the new helper in the same patch as its first users.

>  {
>         struct vm_area_struct *vma;
>         unsigned long hstart, hend;
>
> +       mmap_assert_locked(mm);
> +
>         if (unlikely(khugepaged_test_exit(mm)))
>                 return SCAN_ANY_PROCESS;
>
> @@ -985,7 +984,7 @@ static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
>
>         hstart = (vma->vm_start + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK;
>         hend = vma->vm_end & HPAGE_PMD_MASK;
> -       if (address < hstart || address + HPAGE_PMD_SIZE > hend)
> +       if (address < hstart || (address + nr * HPAGE_PMD_SIZE) > hend)
>                 return SCAN_ADDRESS_RANGE;
>         if (!hugepage_vma_check(vma, vma->vm_flags))
>                 return SCAN_VMA_CHECK;
> @@ -995,6 +994,17 @@ static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
>         return 0;
>  }
>
> +/*
> + * If mmap_lock temporarily dropped, revalidate vma before taking mmap_lock.
> + * Return 0 if succeeds, otherwise return none-zero value (scan code).
> + */
> +
> +static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
> +                                  struct vm_area_struct **vmap)
> +{
> +       return hugepage_vma_revalidate_pmd_count(mm, address, 1, vmap);
> +}
> +
>  /*
>   * Bring missing pages in from swap, to complete THP collapse.
>   * Only done if khugepaged_scan_pmd believes it is worthwhile.
> --
> 2.35.1.616.g0bdcbb4464-goog
>



