On Thu, May 12, 2022 at 4:39 PM Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> wrote:
> On Thu, 12 May 2022 10:45:51 -0700 Yang Shi <shy828301@xxxxxxxxx> wrote:
> > IIUC PVMW checks whether the vma is possibly huge PMD mapped by
> > transparent_hugepage_active() and "pvmw->nr_pages >= HPAGE_PMD_NR".
> >
> > Actually pvmw->nr_pages is returned by compound_nr() or
> > folio_nr_pages(), so the page must be a THP as long as "pvmw->nr_pages
> > >= HPAGE_PMD_NR". And it is guaranteed that a THP is only allocated
> > for a valid VMA in the first place. But it may not be PMD mapped if
> > the VMA is a file VMA that is not properly aligned. The
> > transhuge_vma_suitable() helper does exactly this check, so replace
> > transparent_hugepage_active() with it; the latter is too heavy and
> > overkill here.
> >
> > ...
> >
> > --- a/mm/page_vma_mapped.c
> > +++ b/mm/page_vma_mapped.c
> > @@ -237,13 +237,14 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
> >  			spin_unlock(pvmw->ptl);
> >  			pvmw->ptl = NULL;
> >  		} else if (!pmd_present(pmde)) {
> > +			unsigned long haddr = pvmw->address & HPAGE_PMD_MASK;
> This hits
>
> #define HPAGE_PMD_MASK ({ BUILD_BUG(); 0; })
>
> when CONFIG_TRANSPARENT_HUGEPAGE=n (x86_64 allnoconfig).
Thanks for catching this. I think the best way is to do the rounding in
transhuge_vma_suitable(), which is already protected by the config.
Will prepare v2 soon; a rough, untested sketch of the idea is below.
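
For context, with CONFIG_TRANSPARENT_HUGEPAGE=n the THP constants in
include/linux/huge_mm.h are compile-time traps, so any use of
HPAGE_PMD_MASK outside THP-only code breaks the build (excerpt from
huge_mm.h in this era of the tree):

#else /* CONFIG_TRANSPARENT_HUGEPAGE */
#define HPAGE_PMD_SHIFT ({ BUILD_BUG(); 0; })
#define HPAGE_PMD_MASK ({ BUILD_BUG(); 0; })
#define HPAGE_PMD_SIZE ({ BUILD_BUG(); 0; })

IIRC the !THP build gets a transhuge_vma_suitable() stub that just
returns false, so the THP-only definition is a safe place to do the
masking.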
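
Something along these lines, i.e. pass the unrounded address and let
transhuge_vma_suitable() apply HPAGE_PMD_MASK internally (untested
sketch against the current THP-only definition; v2 may end up looking
different):

static inline bool transhuge_vma_suitable(struct vm_area_struct *vma,
		unsigned long addr)
{
	unsigned long haddr;

	/* Don't have to check pgoff for anonymous vma */
	if (!vma_is_anonymous(vma)) {
		if (!IS_ALIGNED((vma->vm_start >> PAGE_SHIFT) - vma->vm_pgoff,
				HPAGE_PMD_NR))
			return false;
	}

	/*
	 * Round down to the PMD boundary here rather than in the
	 * callers; this definition is only compiled with
	 * CONFIG_TRANSPARENT_HUGEPAGE=y, so HPAGE_PMD_MASK is usable.
	 */
	haddr = addr & HPAGE_PMD_MASK;

	if (haddr < vma->vm_start || haddr + HPAGE_PMD_SIZE > vma->vm_end)
		return false;

	return true;
}

page_vma_mapped_walk() would then pass pvmw->address directly and the
new local haddr in the v1 diff above goes away.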