On 28/07/2019 13:33, Anshuman Khandual wrote:
>
>
> On 07/22/2019 09:12 PM, Steven Price wrote:
>> pgd_entry() and pud_entry() were removed by commit 0b1fbfe50006c410
>> ("mm/pagewalk: remove pgd_entry() and pud_entry()") because there were
>> no users. We're about to add users so reintroduce them, along with
>> p4d_entry() as we now have 5 levels of tables.
>>
>> Note that commit a00cc7d9dd93d66a ("mm, x86: add support for
>> PUD-sized transparent hugepages") already re-added pud_entry() but with
>> different semantics to the other callbacks. Since there have never
>> been upstream users of this, revert the semantics back to match the
>> other callbacks. This means pud_entry() is called for all entries, not
>> just transparent huge pages.
>>
>> Signed-off-by: Steven Price <steven.price@xxxxxxx>
>> ---
>>  include/linux/mm.h | 15 +++++++++------
>>  mm/pagewalk.c      | 27 ++++++++++++++++-----------
>>  2 files changed, 25 insertions(+), 17 deletions(-)
>>
>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>> index 0334ca97c584..b22799129128 100644
>> --- a/include/linux/mm.h
>> +++ b/include/linux/mm.h
>> @@ -1432,15 +1432,14 @@ void unmap_vmas(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
>>
>>  /**
>>   * mm_walk - callbacks for walk_page_range
>> - * @pud_entry: if set, called for each non-empty PUD (2nd-level) entry
>> - *             this handler should only handle pud_trans_huge() puds.
>> - *             the pmd_entry or pte_entry callbacks will be used for
>> - *             regular PUDs.
>> - * @pmd_entry: if set, called for each non-empty PMD (3rd-level) entry
>> + * @pgd_entry: if set, called for each non-empty PGD (top-level) entry
>> + * @p4d_entry: if set, called for each non-empty P4D entry
>> + * @pud_entry: if set, called for each non-empty PUD entry
>> + * @pmd_entry: if set, called for each non-empty PMD entry
>>   *             this handler is required to be able to handle
>>   *             pmd_trans_huge() pmds. They may simply choose to
>>   *             split_huge_page() instead of handling it explicitly.
>> - * @pte_entry: if set, called for each non-empty PTE (4th-level) entry
>> + * @pte_entry: if set, called for each non-empty PTE (lowest-level) entry
>>   * @pte_hole: if set, called for each hole at all levels
>>   * @hugetlb_entry: if set, called for each hugetlb entry
>>   * @test_walk: caller specific callback function to determine whether
>> @@ -1455,6 +1454,10 @@ void unmap_vmas(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
>>   * (see the comment on walk_page_range() for more details)
>>   */
>>  struct mm_walk {
>> +	int (*pgd_entry)(pgd_t *pgd, unsigned long addr,
>> +			 unsigned long next, struct mm_walk *walk);
>> +	int (*p4d_entry)(p4d_t *p4d, unsigned long addr,
>> +			 unsigned long next, struct mm_walk *walk);
>>  	int (*pud_entry)(pud_t *pud, unsigned long addr,
>>  			 unsigned long next, struct mm_walk *walk);
>>  	int (*pmd_entry)(pmd_t *pmd, unsigned long addr,
>> diff --git a/mm/pagewalk.c b/mm/pagewalk.c
>> index c3084ff2569d..98373a9f88b8 100644
>> --- a/mm/pagewalk.c
>> +++ b/mm/pagewalk.c
>> @@ -90,15 +90,9 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
>>  	}
>>
>>  	if (walk->pud_entry) {
>> -		spinlock_t *ptl = pud_trans_huge_lock(pud, walk->vma);
>> -
>> -		if (ptl) {
>> -			err = walk->pud_entry(pud, addr, next, walk);
>> -			spin_unlock(ptl);
>> -			if (err)
>> -				break;
>> -			continue;
>> -		}
>> +		err = walk->pud_entry(pud, addr, next, walk);
>> +		if (err)
>> +			break;
>
> But will not this still encounter possible THP entries when walking user
> page tables (valid walk->vma) in which case still needs to get a lock.
> OR will the callback take care of it ?

This is what I mean in the commit message by:

> Since there have never
> been upstream users of this, revert the semantics back to match the
> other callbacks. This means pud_entry() is called for all entries, not
> just transparent huge pages.

So the expectation is that the caller takes care of it.
However, having checked again, it appears that mm/hmm.c now does use this
callback (merged in v5.2-rc1). Jérôme - are you happy with this change in
semantics? It looks like hmm_vma_walk_pud() should deal gracefully with
both normal and large pages - although I'm unsure whether you are relying
on the lock from pud_trans_huge_lock()?

Thanks,

Steve