The patch titled
     Subject: mm: always lock new vma before inserting into vma tree
has been added to the -mm mm-unstable branch.  Its filename is
     mm-always-lock-new-vma-before-inserting-into-vma-tree.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-always-lock-new-vma-before-inserting-into-vma-tree.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing
    your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Suren Baghdasaryan <surenb@xxxxxxxxxx>
Subject: mm: always lock new vma before inserting into vma tree
Date: Fri, 4 Aug 2023 08:27:23 -0700

While it's not strictly necessary to lock a newly created vma before
adding it into the vma tree (as long as no further changes are performed
to it), it seems like a good policy to lock it and prevent accidental
changes after it becomes visible to page faults.

Lock the vma before adding it into the vma tree.

Link: https://lkml.kernel.org/r/20230804152724.3090321-6-surenb@xxxxxxxxxx
Suggested-by: Jann Horn <jannh@xxxxxxxxxx>
Signed-off-by: Suren Baghdasaryan <surenb@xxxxxxxxxx>
Reviewed-by: Liam R. Howlett <Liam.Howlett@xxxxxxxxxx>
Cc: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/mmap.c |    7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

--- a/mm/mmap.c~mm-always-lock-new-vma-before-inserting-into-vma-tree
+++ a/mm/mmap.c
@@ -403,6 +403,8 @@ static int vma_link(struct mm_struct *mm
 
 	vma_iter_store(&vmi, vma);
 
+	vma_start_write(vma);
+
 	if (vma->vm_file) {
 		mapping = vma->vm_file->f_mapping;
 		i_mmap_lock_write(mapping);
@@ -463,7 +465,8 @@ static inline void vma_prepare(struct vm
 	vma_start_write(vp->vma);
 	if (vp->adj_next)
 		vma_start_write(vp->adj_next);
-	/* vp->insert is always a newly created VMA, no need for locking */
+	if (vp->insert)
+		vma_start_write(vp->insert);
 	if (vp->remove)
 		vma_start_write(vp->remove);
 	if (vp->remove2)
@@ -3093,6 +3096,7 @@ static int do_brk_flags(struct vma_itera
 	vma->vm_pgoff = addr >> PAGE_SHIFT;
 	vm_flags_init(vma, flags);
 	vma->vm_page_prot = vm_get_page_prot(flags);
+	vma_start_write(vma);
 	if (vma_iter_store_gfp(vmi, vma, GFP_KERNEL))
 		goto mas_store_fail;
 
@@ -3341,7 +3345,6 @@ struct vm_area_struct *copy_vma(struct v
 			get_file(new_vma->vm_file);
 		if (new_vma->vm_ops && new_vma->vm_ops->open)
 			new_vma->vm_ops->open(new_vma);
-		vma_start_write(new_vma);
 		if (vma_link(mm, new_vma))
 			goto out_vma_link;
 		*need_rmap_locks = false;
_

Patches currently in -mm which might be from surenb@xxxxxxxxxx are

mm-enable-page-walking-api-to-lock-vmas-during-the-walk.patch
swap-remove-remnants-of-polling-from-read_swap_cache_async.patch
mm-add-missing-vm_fault_result_trace-name-for-vm_fault_completed.patch
mm-drop-per-vma-lock-when-returning-vm_fault_retry-or-vm_fault_completed.patch
mm-change-folio_lock_or_retry-to-use-vm_fault-directly.patch
mm-handle-swap-page-faults-under-per-vma-lock.patch
mm-handle-userfaults-under-vma-lock.patch
mm-handle-userfaults-under-vma-lock-fix.patch
mm-for-config_per_vma_lock-equate-write-lock-assertion-for-vma-and-mmap.patch
mm-replace-mmap-with-vma-write-lock-assertions-when-operating-on-a-vma.patch
mm-lock-vma-explicitly-before-doing-vm_flags_reset-and-vm_flags_reset_once.patch
mm-always-lock-new-vma-before-inserting-into-vma-tree.patch
mm-move-vma-locking-out-of-vma_prepare-and-dup_anon_vma.patch