On Thu, Mar 21, 2019 at 02:19:44PM +0000, Steven Price wrote:
> pgd_entry() and pud_entry() were removed by commit 0b1fbfe50006c410
> ("mm/pagewalk: remove pgd_entry() and pud_entry()") because there were
> no users. We're about to add users so reintroduce them, along with
> p4d_entry() as we now have 5 levels of tables.
> 
> Note that commit a00cc7d9dd93d66a ("mm, x86: add support for
> PUD-sized transparent hugepages") already re-added pud_entry() but with
> different semantics to the other callbacks. Since there have never
> been upstream users of this, revert the semantics back to match the
> other callbacks. This means pud_entry() is called for all entries, not
> just transparent huge pages.
> 
> Signed-off-by: Steven Price <steven.price@xxxxxxx>
> ---
>  include/linux/mm.h |  9 ++++++---
>  mm/pagewalk.c      | 27 ++++++++++++++++-----------
>  2 files changed, 22 insertions(+), 14 deletions(-)
> 
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 76769749b5a5..2983f2396a72 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1367,10 +1367,9 @@ void unmap_vmas(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
>  
>  /**
>   * mm_walk - callbacks for walk_page_range
> + * @pgd_entry: if set, called for each non-empty PGD (top-level) entry
> + * @p4d_entry: if set, called for each non-empty P4D (1st-level) entry

IMHO, p4d implies the 4th level :)

I think it would make more sense to start counting from PTE rather than
from PGD. Then it would be consistent across architectures with fewer
levels.

>   * @pud_entry: if set, called for each non-empty PUD (2nd-level) entry
> - *	       this handler should only handle pud_trans_huge() puds.
> - *	       the pmd_entry or pte_entry callbacks will be used for
> - *	       regular PUDs.
>   * @pmd_entry: if set, called for each non-empty PMD (3rd-level) entry
>   *	       this handler is required to be able to handle
>   *	       pmd_trans_huge() pmds.  They may simply choose to
>   *	       split_huge_page() instead of handling it explicitly.
> @@ -1390,6 +1389,10 @@ void unmap_vmas(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
>   * (see the comment on walk_page_range() for more details)
>   */
>  struct mm_walk {
> +	int (*pgd_entry)(pgd_t *pgd, unsigned long addr,
> +			 unsigned long next, struct mm_walk *walk);
> +	int (*p4d_entry)(p4d_t *p4d, unsigned long addr,
> +			 unsigned long next, struct mm_walk *walk);
>  	int (*pud_entry)(pud_t *pud, unsigned long addr,
>  			 unsigned long next, struct mm_walk *walk);
>  	int (*pmd_entry)(pmd_t *pmd, unsigned long addr,
> diff --git a/mm/pagewalk.c b/mm/pagewalk.c
> index c3084ff2569d..98373a9f88b8 100644
> --- a/mm/pagewalk.c
> +++ b/mm/pagewalk.c
> @@ -90,15 +90,9 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
>  		}
>  
>  		if (walk->pud_entry) {
> -			spinlock_t *ptl = pud_trans_huge_lock(pud, walk->vma);
> -
> -			if (ptl) {
> -				err = walk->pud_entry(pud, addr, next, walk);
> -				spin_unlock(ptl);
> -				if (err)
> -					break;
> -				continue;
> -			}
> +			err = walk->pud_entry(pud, addr, next, walk);
> +			if (err)
> +				break;
>  		}
>  
>  		split_huge_pud(walk->vma, pud, addr);
> @@ -131,7 +125,12 @@ static int walk_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end,
>  				break;
>  			continue;
>  		}
> -		if (walk->pmd_entry || walk->pte_entry)
> +		if (walk->p4d_entry) {
> +			err = walk->p4d_entry(p4d, addr, next, walk);
> +			if (err)
> +				break;
> +		}
> +		if (walk->pud_entry || walk->pmd_entry || walk->pte_entry)
>  			err = walk_pud_range(p4d, addr, next, walk);
>  		if (err)
>  			break;
> @@ -157,7 +156,13 @@ static int walk_pgd_range(unsigned long addr, unsigned long end,
>  				break;
>  			continue;
>  		}
> -		if (walk->pmd_entry || walk->pte_entry)
> +		if (walk->pgd_entry) {
> +			err = walk->pgd_entry(pgd, addr, next, walk);
> +			if (err)
> +				break;
> +		}
> +		if (walk->p4d_entry || walk->pud_entry || walk->pmd_entry ||
> +		    walk->pte_entry)
>  			err = walk_p4d_range(pgd, addr, next, walk);
>  		if (err)
>  			break;
> -- 
> 2.20.1

-- 
Sincerely yours,
Mike.