On 2022/9/22 3:07, Andrew Morton wrote:
> On Wed, 21 Sep 2022 16:34:40 +0800 Liu Shixin <liushixin2@xxxxxxxxxx> wrote:
>
>> The vma_lock and hugetlb_fault_mutex are dropped before handling
>> userfault and reacquired after handle_userfault(), but reacquiring
>> the vma_lock can lead to a UAF[1] due to the following race:
>>
>> hugetlb_fault
>>   hugetlb_no_page
>>     /* unlock vma_lock */
>>     hugetlb_handle_userfault
>>       handle_userfault
>>         /* unlock mm->mmap_lock */
>>                                        vm_mmap_pgoff
>>                                          do_mmap
>>                                            mmap_region
>>                                              munmap_vma_range
>>                                                /* clean old vma */
>>         /* lock vma_lock again  <--- UAF */
>>     /* unlock vma_lock */
>>
>> Since the vma_lock is unlocked immediately after hugetlb_handle_userfault()
>> returns, drop the unneeded unlock/lock pair in hugetlb_handle_userfault() to
>> fix the issue.
>>
>> @@ -5508,17 +5507,12 @@ static inline vm_fault_t hugetlb_handle_userfault(struct vm_area_struct *vma,
>>
>>  	/*
>>  	 * vma_lock and hugetlb_fault_mutex must be
>> -	 * dropped before handling userfault. Reacquire
>> -	 * after handling fault to make calling code simpler.
>> +	 * dropped before handling userfault.
>>  	 */
>>  	hugetlb_vma_unlock_read(vma);
>>  	hash = hugetlb_fault_mutex_hash(mapping, idx);
>>  	mutex_unlock(&hugetlb_fault_mutex_table[hash]);
>> -	ret = handle_userfault(&vmf, reason);
>> -	mutex_lock(&hugetlb_fault_mutex_table[hash]);
>> -	hugetlb_vma_lock_read(vma);
>> -
>> -	return ret;
>> +	return handle_userfault(&vmf, reason);
>>  }
>
> Current code is rather different from this. So if the bug still exists
> in current code, please verify this and redo the patch appropriately?
>
> And hang on to this version to help with the -stable backporting.
>
> Thanks.

This patch conflicts with the patch series "hugetlb: Use new vma lock for huge pmd sharing synchronization", so I reproduced the problem on next-20220920, and this patch is based on next-20220920 instead of mainline. This problem has existed since v4.11. I will send the stable version later.