The patch titled
     Subject: mm: replace mmap with vma write lock assertions when operating on a vma
has been added to the -mm mm-unstable branch.  Its filename is
     mm-replace-mmap-with-vma-write-lock-assertions-when-operating-on-a-vma.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-replace-mmap-with-vma-write-lock-assertions-when-operating-on-a-vma.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Suren Baghdasaryan <surenb@xxxxxxxxxx>
Subject: mm: replace mmap with vma write lock assertions when operating on a vma
Date: Fri, 4 Aug 2023 08:27:21 -0700

Vma write lock assertion always includes mmap write lock assertion and
additional vma lock checks when per-VMA locks are enabled.  Replace weaker
mmap_assert_write_locked() assertions with stronger
vma_assert_write_locked() ones when we are operating on a vma which is
expected to be locked.
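
For readers unfamiliar with the per-VMA locking assertions, the sketch
below shows roughly why vma_assert_write_locked() is the stronger check:
with CONFIG_PER_VMA_LOCK it asserts both the mmap_lock and the vma's own
write lock, and without it it falls back to mmap_assert_write_locked().
This is a simplified illustration based on the include/linux/mm.h
definitions this series builds on (helper names such as
__is_vma_write_locked() and VM_BUG_ON_VMA() are taken from that tree and
may differ in later kernels); it is not part of the patch itself:

	#ifdef CONFIG_PER_VMA_LOCK
	static inline void vma_assert_write_locked(struct vm_area_struct *vma)
	{
		int mm_lock_seq;

		/*
		 * __is_vma_write_locked() first runs
		 * mmap_assert_write_locked(vma->vm_mm) and then checks that
		 * vma->vm_lock_seq matches the mm's lock sequence, i.e. that
		 * this specific vma has been write-locked.
		 */
		VM_BUG_ON_VMA(!__is_vma_write_locked(vma, &mm_lock_seq), vma);
	}
	#else
	static inline void vma_assert_write_locked(struct vm_area_struct *vma)
	{
		/* Without per-VMA locks this degenerates to the mmap_lock assertion. */
		mmap_assert_write_locked(vma->vm_mm);
	}
	#endif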

Link: https://lkml.kernel.org/r/20230804152724.3090321-4-surenb@xxxxxxxxxx
Suggested-by: Jann Horn <jannh@xxxxxxxxxx>
Signed-off-by: Suren Baghdasaryan <surenb@xxxxxxxxxx>
Reviewed-by: Liam R. Howlett <Liam.Howlett@xxxxxxxxxx>
Cc: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/hugetlb.c |    2 +-
 mm/memory.c  |    2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

--- a/mm/hugetlb.c~mm-replace-mmap-with-vma-write-lock-assertions-when-operating-on-a-vma
+++ a/mm/hugetlb.c
@@ -5056,7 +5056,7 @@ int copy_hugetlb_page_range(struct mm_st
 					src_vma->vm_start,
 					src_vma->vm_end);
 		mmu_notifier_invalidate_range_start(&range);
-		mmap_assert_write_locked(src);
+		vma_assert_write_locked(src_vma);
 		raw_write_seqcount_begin(&src->write_protect_seq);
 	} else {
 		/*
--- a/mm/memory.c~mm-replace-mmap-with-vma-write-lock-assertions-when-operating-on-a-vma
+++ a/mm/memory.c
@@ -1312,7 +1312,7 @@ copy_page_range(struct vm_area_struct *d
 		 * Use the raw variant of the seqcount_t write API to avoid
 		 * lockdep complaining about preemptibility.
 		 */
-		mmap_assert_write_locked(src_mm);
+		vma_assert_write_locked(src_vma);
 		raw_write_seqcount_begin(&src_mm->write_protect_seq);
 	}
_

Patches currently in -mm which might be from surenb@xxxxxxxxxx are

mm-enable-page-walking-api-to-lock-vmas-during-the-walk.patch
swap-remove-remnants-of-polling-from-read_swap_cache_async.patch
mm-add-missing-vm_fault_result_trace-name-for-vm_fault_completed.patch
mm-drop-per-vma-lock-when-returning-vm_fault_retry-or-vm_fault_completed.patch
mm-change-folio_lock_or_retry-to-use-vm_fault-directly.patch
mm-handle-swap-page-faults-under-per-vma-lock.patch
mm-handle-userfaults-under-vma-lock.patch
mm-handle-userfaults-under-vma-lock-fix.patch
mm-for-config_per_vma_lock-equate-write-lock-assertion-for-vma-and-mmap.patch
mm-replace-mmap-with-vma-write-lock-assertions-when-operating-on-a-vma.patch
mm-lock-vma-explicitly-before-doing-vm_flags_reset-and-vm_flags_reset_once.patch
mm-always-lock-new-vma-before-inserting-into-vma-tree.patch
mm-move-vma-locking-out-of-vma_prepare-and-dup_anon_vma.patch