On 7/13/21 9:29 PM, Mike Kravetz wrote:
> On 7/13/21 8:24 AM, Joao Martins wrote:
>> commit 82e5d378b0e47 ("mm/hugetlb: refactor subpage recording")
>> refactored the count of subpages but missed an edge case when @vaddr is
>> not aligned to PAGE_SIZE e.g. when close to vma->vm_end. It would then
>> erroneously set @refs to 0 and record_subpages_vmas() wouldn't set the
>> @pages array element to its value, consequently causing the reported
>> null-deref by syzbot.
>>
>> Fix it by aligning down @vaddr by PAGE_SIZE in the @refs calculation.
>
> Thanks for finding and fixing!
>
>>
>> Reported-by: syzbot+a3fcd59df1b372066f5a@xxxxxxxxxxxxxxxxxxxxxxxxx
>> Fixes: 82e5d378b0e47 ("mm/hugetlb: refactor subpage recording")
>> Signed-off-by: Joao Martins <joao.m.martins@xxxxxxxxxx>
>> ---
>> An alternate approach is to have record_subpages_vmas() iterate while
>> addr < vm_end and rename @refs to nr_pages (which would limit how many
>> pages we should store). But I felt that this approach would be slightly
>> more convoluted?
>
> I prefer the approach you have taken in this patch.
>

OK.

>>
>> Side-Note: I could add a WARN_ON_ONCE(!refs) and/or create a
>> helper like vma_pages() but with a ulong addr argument e.g.
>> vma_pages_from(vma, vaddr).
>
> IIUC, the only way refs could be zero is if there was an error in
> calculations within this routine. Correct?

Right. Albeit vaddr is originally initialized with gup()'s starting address.

> IMO, the only reason to add a warning would be if there are any assumptions
> based on things outside this routine which could cause refs to be zero.
>

/me nods

>> The syzbot repro no longer reproduces after this patch. Additionally, I ran
>> the libhugetlbfs tests (which were passing without this), gup_test and an
>> extra gup_test extension that takes an offset to exercise a gup() starting
>> address that is not page aligned.
>> ---
>>  mm/hugetlb.c | 5 +++--
>>  1 file changed, 3 insertions(+), 2 deletions(-)
>>
>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>> index 924553aa8f78..dfc940d5221d 100644
>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
>> @@ -5440,8 +5440,9 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
>>  			continue;
>>  		}
>>
>> -		refs = min3(pages_per_huge_page(h) - pfn_offset,
>> -			    (vma->vm_end - vaddr) >> PAGE_SHIFT, remainder);
>> +		/* vaddr may not be aligned to PAGE_SIZE */
>> +		refs = min3(pages_per_huge_page(h) - pfn_offset, remainder,
>> +			    (vma->vm_end - ALIGN_DOWN(vaddr, PAGE_SIZE)) >> PAGE_SHIFT);
>>
>>  		if (pages || vmas)
>>  			record_subpages_vmas(mem_map_offset(page, pfn_offset),
>
> Reviewed-by: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
>

Thanks!
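
P.S. For reference, a minimal standalone user-space sketch of the arithmetic (not the kernel code itself; PAGE_SHIFT and the vm_end/vaddr values below are purely illustrative) showing how the old formula truncates @refs to 0 when @vaddr sits inside the VMA's last page, while aligning down yields at least 1:

/* Illustrative sketch only; addresses and PAGE_SHIFT are assumptions. */
#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
/* Power-of-two alignment, like the kernel's ALIGN_DOWN() for this case. */
#define ALIGN_DOWN(x, a)	((x) & ~((unsigned long)(a) - 1))

int main(void)
{
	unsigned long vm_end = 0x20000000UL;	/* hypothetical vma->vm_end */
	unsigned long vaddr  = vm_end - 0x800;	/* unaligned, inside the last page */

	/* Old calculation: truncates to 0 because vm_end - vaddr < PAGE_SIZE. */
	unsigned long old_refs = (vm_end - vaddr) >> PAGE_SHIFT;

	/* Fixed calculation: aligning vaddr down guarantees at least one page. */
	unsigned long new_refs = (vm_end - ALIGN_DOWN(vaddr, PAGE_SIZE)) >> PAGE_SHIFT;

	printf("old refs = %lu, new refs = %lu\n", old_refs, new_refs);	/* 0 vs 1 */
	return 0;
}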