On Thu, Jan 05, 2023 at 10:18:17AM +0000, James Houghton wrote:
> @@ -6731,22 +6746,22 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
>  	 * and skip the same_page loop below.
>  	 */
>  	if (!pages && !vmas && !pfn_offset &&
> -	    (vaddr + huge_page_size(h) < vma->vm_end) &&
> -	    (remainder >= pages_per_huge_page(h))) {
> -		vaddr += huge_page_size(h);
> -		remainder -= pages_per_huge_page(h);
> -		i += pages_per_huge_page(h);
> +	    (vaddr + pages_per_hpte < vma->vm_end) &&
> +	    (remainder >= pages_per_hpte)) {
> +		vaddr += pages_per_hpte;

This silently breaks hugetlb GUP: vaddr is a byte address, so it should be

	vaddr += hugetlb_pte_size(&hpte);

It caused mysterious MISSING events when I was playing with this tree, and
I'm surprised the bug was rooted here.  So far the most time-consuming one
to track down. :)

> +		remainder -= pages_per_hpte;
> +		i += pages_per_hpte;
> 		spin_unlock(ptl);
> 		hugetlb_vma_unlock_read(vma);
> 		continue;
> 	}

-- 
Peter Xu