On 1/26/21 11:21 AM, Joao Martins wrote:
> On 1/26/21 6:08 PM, Mike Kravetz wrote:
>> On 1/25/21 12:57 PM, Joao Martins wrote:
>>>
>>> +static void record_subpages_vmas(struct page *page, struct vm_area_struct *vma,
>>> +				 int refs, struct page **pages,
>>> +				 struct vm_area_struct **vmas)
>>> +{
>>> +	int nr;
>>> +
>>> +	for (nr = 0; nr < refs; nr++) {
>>> +		if (likely(pages))
>>> +			pages[nr] = page++;
>>> +		if (vmas)
>>> +			vmas[nr] = vma;
>>> +	}
>>> +}
>>> +
>>>  long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
>>>  			 struct page **pages, struct vm_area_struct **vmas,
>>>  			 unsigned long *position, unsigned long *nr_pages,
>>> @@ -4918,28 +4932,16 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
>>>  			continue;
>>>  		}
>>>
>>> -		refs = 0;
>>> +		refs = min3(pages_per_huge_page(h) - pfn_offset,
>>> +			    (vma->vm_end - vaddr) >> PAGE_SHIFT, remainder);
>>>
>>> -same_page:
>>> -		if (pages)
>>> -			pages[i] = mem_map_offset(page, pfn_offset);
>>> +		if (pages || vmas)
>>> +			record_subpages_vmas(mem_map_offset(page, pfn_offset),
>>
>> The assumption made here is that mem_map is contiguous for the range of
>> pages in the hugetlb page.  I do not believe you can make this assumption
>> for (gigantic) hugetlb pages which are > MAX_ORDER_NR_PAGES.  For example,
>>

Thinking about this a bit more ...

mem_map can be accessed contiguously if we have a virtual memmap.  Correct?
I suspect virtual memmap may be the most common configuration today.
However, it seems we do need to handle other configurations.

> That would mean get_user_pages_fast() and put_user_pages_fast() are broken for anything
> handling PUDs or above? See record_subpages() in gup_huge_pud() or even gup_huge_pgd().
> It's using the same page++.

Yes, I believe those would also have the issue.
Cc: John and Jason as they have spent a significant amount of time in gup
code recently.  There may be something that makes that code safe?

> This adjustment below probably is what you're trying to suggest.
>
> Also, nth_page() is slightly more expensive and so the numbers above change from ~4.4k
> usecs to ~7.8k usecs.

If my thoughts about virtual memmap are correct, then could we simply have
a !vmemmap version of mem_map_offset (or similar routine) to avoid overhead?
-- 
Mike Kravetz

>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 1f7a95bc7c87..cf66f8c2f92a 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -4789,15 +4789,16 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
>  		goto out;
>  	}
>
> -static void record_subpages_vmas(struct page *page, struct vm_area_struct *vma,
> +static void record_subpages_vmas(struct page *page, unsigned long pfn_offset,
> +				 struct vm_area_struct *vma,
>  				 int refs, struct page **pages,
>  				 struct vm_area_struct **vmas)
>  {
> -	int nr;
> +	unsigned long nr;
>
>  	for (nr = 0; nr < refs; nr++) {
>  		if (likely(pages))
> -			pages[nr] = page++;
> +			pages[nr] = mem_map_offset(page, pfn_offset + nr);
>  		if (vmas)
>  			vmas[nr] = vma;
>  	}
> @@ -4936,8 +4937,7 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
>  			 (vma->vm_end - vaddr) >> PAGE_SHIFT, remainder);
>
>  		if (pages || vmas)
> -			record_subpages_vmas(mem_map_offset(page, pfn_offset),
> -					     vma, refs,
> +			record_subpages_vmas(page, pfn_offset, vma, refs,
>  				likely(pages) ? pages + i : NULL,
>  				vmas ? vmas + i : NULL);
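
For illustration only, a rough sketch of the "!vmemmap version of
mem_map_offset" idea mentioned above; the helper name
hugetlb_page_offset() is hypothetical, not an existing kernel symbol,
and this is untested:

/*
 * Sketch: only pay the pfn_to_page() cost (via nth_page()) when the
 * memmap is not guaranteed to be contiguous, i.e. classic SPARSEMEM
 * without a virtual memmap.  With FLATMEM or SPARSEMEM_VMEMMAP the
 * struct pages of a gigantic page are (virtually) contiguous, so plain
 * pointer arithmetic is safe.
 */
static inline struct page *hugetlb_page_offset(struct page *base,
					       unsigned long offset)
{
#if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
	/* mem_map may be discontiguous across section boundaries */
	return nth_page(base, offset);
#else
	/* memmap is (virtually) contiguous */
	return base + offset;
#endif
}

With a helper along those lines, record_subpages_vmas() could keep the
cheap pointer increment in the common vmemmap case and only take the
pfn-based lookup on configurations where mem_map contiguity cannot be
assumed.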