The patch titled
     Subject: hugetlb: take hugetlb vma_lock when clearing vma_lock->vma pointer
has been added to the -mm mm-unstable branch.  Its filename is
     hugetlb-take-hugetlb-vma_lock-when-clearing-vma_lock-vma-pointer.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/hugetlb-take-hugetlb-vma_lock-when-clearing-vma_lock-vma-pointer.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
Subject: hugetlb: take hugetlb vma_lock when clearing vma_lock->vma pointer
Date: Tue, 4 Oct 2022 18:17:06 -0700

hugetlb file truncation/hole punch code may need to back out and take
locks in order in the routine hugetlb_unmap_file_folio().  This code
could race with vma freeing as pointed out in [1] and result in
accessing a stale vma pointer.  To address this, take the vma_lock when
clearing the vma_lock->vma pointer.

[1] https://lore.kernel.org/linux-mm/01f10195-7088-4462-6def-909549c75ef4@xxxxxxxxxx/

Link: https://lkml.kernel.org/r/20221005011707.514612-3-mike.kravetz@xxxxxxxxxx
Fixes: "hugetlb: use new vma_lock for pmd sharing synchronization"
Signed-off-by: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@xxxxxxxxxxxxxxxxxx>
Cc: Axel Rasmussen <axelrasmussen@xxxxxxxxxx>
Cc: David Hildenbrand <david@xxxxxxxxxx>
Cc: Davidlohr Bueso <dave@xxxxxxxxxxxx>
Cc: James Houghton <jthoughton@xxxxxxxxxx>
Cc: "Kirill A. Shutemov" <kirill.shutemov@xxxxxxxxxxxxxxx>
Cc: Miaohe Lin <linmiaohe@xxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxx>
Cc: Mina Almasry <almasrymina@xxxxxxxxxx>
Cc: Muchun Song <songmuchun@xxxxxxxxxxxxx>
Cc: Naoya Horiguchi <naoya.horiguchi@xxxxxxxxx>
Cc: Pasha Tatashin <pasha.tatashin@xxxxxxxxxx>
Cc: Peter Xu <peterx@xxxxxxxxxx>
Cc: Prakash Sangappa <prakash.sangappa@xxxxxxxxxx>
Cc: Sven Schnelle <svens@xxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/hugetlb.c |   38 ++++++++++++++++++++++++++++----------
 1 file changed, 28 insertions(+), 10 deletions(-)

--- a/mm/hugetlb.c~hugetlb-take-hugetlb-vma_lock-when-clearing-vma_lock-vma-pointer
+++ a/mm/hugetlb.c
@@ -93,6 +93,7 @@ struct mutex *hugetlb_fault_mutex_table
 static int hugetlb_acct_memory(struct hstate *h, long delta);
 static void hugetlb_vma_lock_free(struct vm_area_struct *vma);
 static void hugetlb_vma_lock_alloc(struct vm_area_struct *vma);
+static void __hugetlb_vma_unlock_write_free(struct vm_area_struct *vma);

 static inline bool subpool_is_free(struct hugepage_subpool *spool)
 {
@@ -5192,8 +5193,7 @@ void __unmap_hugepage_range_final(struct
	 * be asynchrously deleted. If the page tables are shared, there
	 * will be issues when accessed by someone else.
	 */
-	hugetlb_vma_unlock_write(vma);
-	hugetlb_vma_lock_free(vma);
+	__hugetlb_vma_unlock_write_free(vma);

 	i_mmap_unlock_write(vma->vm_file->f_mapping);
 }
@@ -6832,6 +6832,30 @@ void hugetlb_vma_lock_release(struct kre
 	kfree(vma_lock);
 }

+void __hugetlb_vma_unlock_write_put(struct hugetlb_vma_lock *vma_lock)
+{
+	struct vm_area_struct *vma = vma_lock->vma;
+
+	/*
+	 * vma_lock structure may or not be released as a result of put,
+	 * it certainly will no longer be attached to vma so clear pointer.
+	 * Semaphore synchronizes access to vma_lock->vma field.
+	 */
+	vma_lock->vma = NULL;
+	vma->vm_private_data = NULL;
+	up_write(&vma_lock->rw_sema);
+	kref_put(&vma_lock->refs, hugetlb_vma_lock_release);
+}
+
+void __hugetlb_vma_unlock_write_free(struct vm_area_struct *vma)
+{
+	if (__vma_shareable_flags_pmd(vma)) {
+		struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
+
+		__hugetlb_vma_unlock_write_put(vma_lock);
+	}
+}
+
 static void hugetlb_vma_lock_free(struct vm_area_struct *vma)
 {
 	/*
@@ -6843,14 +6867,8 @@ static void hugetlb_vma_lock_free(struct
 	if (vma->vm_private_data) {
 		struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;

-		/*
-		 * vma_lock structure may or not be released, but it
-		 * certainly will no longer be attached to vma so clear
-		 * pointer.
-		 */
-		vma_lock->vma = NULL;
-		kref_put(&vma_lock->refs, hugetlb_vma_lock_release);
-		vma->vm_private_data = NULL;
+		down_write(&vma_lock->rw_sema);
+		__hugetlb_vma_unlock_write_put(vma_lock);
 	}
 }
_

Patches currently in -mm which might be from mike.kravetz@xxxxxxxxxx are

hugetlb-fix-vma-lock-handling-during-split-vma-and-range-unmapping.patch
hugetlb-take-hugetlb-vma_lock-when-clearing-vma_lock-vma-pointer.patch
hugetlb-take-hugetlb-vma_lock-when-clearing-vma_lock-vma-pointer-fix.patch
hugetlb-allocate-vma-lock-for-all-sharable-vmas.patch
hugetlb-simplify-hugetlb-handling-in-follow_page_mask.patch
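
For readers following along, below is a minimal user-space sketch of the
locking rule the patch enforces: the vma_lock->vma back-pointer is cleared
only while the lock's semaphore is held for write, so any path that acquires
the lock never dereferences a stale vma pointer.  This is an illustrative
analogue, not kernel code; the names loosely mimic mm/hugetlb.c, a pthread
rwlock stands in for rw_sema, and free() stands in for the final kref_put().

/* Build with: cc sketch.c -o sketch -lpthread (illustrative only). */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct vma;

struct vma_lock {
	pthread_rwlock_t rw_sema;	/* stand-in for rw_sema */
	struct vma *vma;		/* back-pointer, cleared under rw_sema */
};

struct vma {
	struct vma_lock *lock;		/* stand-in for vm_private_data */
};

/* Analogue of __hugetlb_vma_unlock_write_put(): caller holds the write lock. */
static void unlock_write_put(struct vma_lock *lock)
{
	struct vma *vma = lock->vma;

	lock->vma = NULL;		/* detach while still serialized */
	vma->lock = NULL;
	pthread_rwlock_unlock(&lock->rw_sema);
	pthread_rwlock_destroy(&lock->rw_sema);
	free(lock);			/* stand-in for dropping the last reference */
}

/* Analogue of hugetlb_vma_lock_free(): take the lock before detaching. */
static void vma_lock_free(struct vma *vma)
{
	struct vma_lock *lock = vma->lock;

	if (!lock)
		return;
	pthread_rwlock_wrlock(&lock->rw_sema);
	unlock_write_put(lock);
}

int main(void)
{
	struct vma vma;
	struct vma_lock *lock = malloc(sizeof(*lock));

	pthread_rwlock_init(&lock->rw_sema, NULL);
	lock->vma = &vma;
	vma.lock = lock;

	vma_lock_free(&vma);
	printf("vma detached: %s\n", vma.lock == NULL ? "yes" : "no");
	return 0;
}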