On Wed, Jun 09, 2021 at 11:36:36PM -0700, Hugh Dickins wrote:
> page_vma_mapped_walk() cleanup: get the hugetlbfs PageHuge case
> out of the way at the start, so no need to worry about it later.
>
> Signed-off-by: Hugh Dickins <hughd@xxxxxxxxxx>
> Cc: <stable@xxxxxxxxxxxxxxx>
> ---
>  mm/page_vma_mapped.c | 12 ++++++++----
>  1 file changed, 8 insertions(+), 4 deletions(-)
>
> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
> index a6dbf714ca15..7c0504641fb8 100644
> --- a/mm/page_vma_mapped.c
> +++ b/mm/page_vma_mapped.c
> @@ -153,10 +153,11 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
>  	if (pvmw->pmd && !pvmw->pte)
>  		return not_found(pvmw);
>
> -	if (pvmw->pte)
> -		goto next_pte;
> -
>  	if (unlikely(PageHuge(page))) {
> +		/* The only possible mapping was handled on last iteration */
> +		if (pvmw->pte)
> +			return not_found(pvmw);
> +
>  		/* when pud is not present, pte will be NULL */
>  		pvmw->pte = huge_pte_offset(mm, pvmw->address, page_size(page));
>  		if (!pvmw->pte)

Would it be even nicer to move the initial check to after the PageHuge() block too?

	if (pvmw->pmd && !pvmw->pte)
		return not_found(pvmw);
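
For concreteness, the reordering I have in mind would look roughly like
this (just an untested sketch; the body of the hugetlb branch is elided
as in the hunk above, and I'm assuming every path through that branch
returns before falling out of the block):

	if (unlikely(PageHuge(page))) {
		/* The only possible mapping was handled on last iteration */
		if (pvmw->pte)
			return not_found(pvmw);

		/*
		 * ... huge_pte_offset() lookup and check as in the hunk
		 * above; all paths return before leaving this block ...
		 */
	}

	/* The only possible pmd mapping was handled on last iteration */
	if (pvmw->pmd && !pvmw->pte)
		return not_found(pvmw);

That way the pmd check would only ever run for the THP and regular page
cases. It already looks better as it is, though, so no strong opinion.

Reviewed-by: Peter Xu <peterx@xxxxxxxxxx>

Thanks,

--
Peter Xu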