On 9/25/19 5:35 PM, Wei Yang wrote:
> On Wed, Sep 25, 2019 at 10:44:58AM -0700, Mike Kravetz wrote:
>> On 9/25/19 5:18 AM, Wei Yang wrote:
>>> The warning here is to make sure the address (dst_addr) and length
>>> (len - copied) are huge page size aligned.
>>>
>>> This is already ensured because:
>>>
>>>   dst_start and len are huge page size aligned
>>>   dst_addr equals dst_start and increases by huge page size each time
>>>   copied increases by huge page size each time
>>
>> Can we also remove the following for the same reasons?
>>
>> diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
>> index 640ff2bd9a69..f82d5ec698d8 100644
>> --- a/mm/userfaultfd.c
>> +++ b/mm/userfaultfd.c
>> @@ -262,7 +262,6 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
>>  		pte_t dst_pteval;
>>
>>  		BUG_ON(dst_addr >= dst_start + len);
>> -		VM_BUG_ON(dst_addr & ~huge_page_mask(h));
>>
>
> Thanks for your comment.
>
> It looks good, though I am not sure about the relationship between
> vma_hpagesize and huge_page_mask().

vma_hpagesize is just a local variable used so that repeated calls to
vma_kernel_pagesize() or huge_page_size() are not necessary.

> If they are the same, why not use the same interface for all those
> checks in this function?

If we remove the VM_BUG_ON, that is the only use of huge_page_mask() in
the function.  We can also eliminate a call to huge_page_size() by
making this change.

@@ -273,7 +272,7 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
 		mutex_lock(&hugetlb_fault_mutex_table[hash]);

 		err = -ENOMEM;
-		dst_pte = huge_pte_alloc(dst_mm, dst_addr, huge_page_size(h));
+		dst_pte = huge_pte_alloc(dst_mm, dst_addr, vma_hpagesize);
 		if (!dst_pte) {
 			mutex_unlock(&hugetlb_fault_mutex_table[hash]);
 			goto out_unlock;
-- 
Mike Kravetz
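
For readers following the thread, below is a minimal user-space sketch
of the alignment invariant being discussed.  The constants and loop
shape are illustrative stand-ins for __mcopy_atomic_hugetlb, not the
kernel code itself: it only demonstrates that once dst_start and len
are huge page size aligned and dst_addr/copied advance by
vma_hpagesize per iteration, the condition checked by the removed
VM_BUG_ON can never be true.

#include <assert.h>
#include <stdio.h>

int main(void)
{
	/* Illustrative values; the kernel gets these from the vma. */
	const unsigned long vma_hpagesize = 2UL << 20;	  /* 2 MB huge page */
	const unsigned long hmask = ~(vma_hpagesize - 1); /* huge_page_mask() */
	unsigned long dst_start = 0x40000000UL;	/* aligned, checked by caller */
	unsigned long len = 8 * vma_hpagesize;	/* aligned, checked by caller */
	unsigned long dst_addr = dst_start;
	unsigned long copied = 0;

	while (copied < len) {
		/* The invariant the removed VM_BUG_ON used to check. */
		assert(!(dst_addr & ~hmask));
		/* The BUG_ON that remains in the function. */
		assert(dst_addr < dst_start + len);

		/* Both counters step by exactly one huge page. */
		dst_addr += vma_hpagesize;
		copied += vma_hpagesize;
	}
	printf("alignment invariant held for %lu pages\n",
	       len / vma_hpagesize);
	return 0;
}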