On 7/24/19 6:39 PM, Navid Emamdoost wrote:
> find_vma may fail and return NULL. The null check is added.
>
> Signed-off-by: Navid Emamdoost <navid.emamdoost@xxxxxxxxx>
> ---
>  mm/hugetlb.c | 3 +++
>  1 file changed, 3 insertions(+)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index ede7e7f5d1ab..9c5e8b7a6476 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -4743,6 +4743,9 @@ void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
>  pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
>  {
>  	struct vm_area_struct *vma = find_vma(mm, addr);
> +	if (!vma)
> +		return (pte_t *)pmd_alloc(mm, pud, addr);
> +

Hello Navid,

You should not mix declarations and code like this.  I am surprised that
your compiler did not issue a warning such as:

mm/hugetlb.c: In function ‘huge_pmd_share’:
mm/hugetlb.c:4815:2: warning: ISO C90 forbids mixed declarations and code [-Wdeclaration-after-statement]
  struct address_space *mapping = vma->vm_file->f_mapping;
  ^~~~~~

While it is true that the routine find_vma can return NULL, I do not
believe that is possible here within the context of huge_pmd_share.

Why?  huge_pmd_share is called from huge_pte_alloc to allocate a page
table entry for a huge page.  So, the calling code is attempting to
populate page tables.  There are three callers of huge_pte_alloc:
hugetlb_fault, copy_hugetlb_page_range and __mcopy_atomic_hugetlb.  In
each of these routines (or their callers) it has already been verified
that the address is within a vma.  In addition, mmap_sem is held so
that the vmas can not change.  Therefore, there should be no way for
find_vma to return NULL here.

Please let me know if there is something I have overlooked.  Otherwise,
there is no need for such a modification.
-- 
Mike Kravetz

> 	struct address_space *mapping = vma->vm_file->f_mapping;
> 	pgoff_t idx = ((addr - vma->vm_start) >> PAGE_SHIFT) +
> 			vma->vm_pgoff;
>
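
For what it is worth, if such a check ever did turn out to be necessary,
it would have to go after the declarations (or the declarations would
have to be split from their initializers) to avoid the warning above.
A minimal sketch, assuming the rest of the function body stays as
quoted, might look like:

	pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
	{
		struct vm_area_struct *vma = find_vma(mm, addr);
		struct address_space *mapping;
		pgoff_t idx;

		/*
		 * Hypothetical check; as explained above, find_vma should
		 * not return NULL in this context.
		 */
		if (!vma)
			return (pte_t *)pmd_alloc(mm, pud, addr);

		mapping = vma->vm_file->f_mapping;
		idx = ((addr - vma->vm_start) >> PAGE_SHIFT) +
				vma->vm_pgoff;
		...
	}

But again, I do not think the check is needed here at all.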