[folded-merged] mm-handle-userfaults-under-vma-lock-fix.patch removed from -mm tree

The quilt patch titled
     Subject: mm: fix a lockdep issue in vma_assert_write_locked
has been removed from the -mm tree.  Its filename was
     mm-handle-userfaults-under-vma-lock-fix.patch

This patch was dropped because it was folded into mm-handle-userfaults-under-vma-lock.patch

------------------------------------------------------
From: Suren Baghdasaryan <surenb@xxxxxxxxxx>
Subject: mm: fix a lockdep issue in vma_assert_write_locked
Date: Wed, 12 Jul 2023 12:56:52 -0700

__is_vma_write_locked() can be used only when mmap_lock is write-locked,
which guarantees that vm_lock_seq and mm_lock_seq remain stable during the
check, so it asserts that condition before doing anything else.  As a
result it cannot be called from any context where the caller does not
expect mmap_lock to be write-locked.  vma_assert_locked() cannot assume
this until it has first ruled out the VMA being read-locked.

Change the order of the checks in vma_assert_locked(): check whether the
VMA is read-locked first, and only if it is not, assert that it is
write-locked.

Link: https://lkml.kernel.org/r/20230712195652.969194-1-surenb@xxxxxxxxxx
Fixes: 50b88b63e3e4 ("mm: handle userfaults under VMA lock")
Signed-off-by: Suren Baghdasaryan <surenb@xxxxxxxxxx>
Reported-by: Liam R. Howlett <liam.howlett@xxxxxxxxxx>
Closes: https://lore.kernel.org/all/20230712022620.3yytbdh24b7i4zrn@revolver/
Reported-by: syzbot+339b02f826caafd5f7a8@xxxxxxxxxxxxxxxxxxxxxxxxx
Closes: https://lore.kernel.org/all/0000000000002db68f05ffb791bc@xxxxxxxxxx/
Cc: Christian Brauner <brauner@xxxxxxxxxx>
Cc: Laurent Dufour <ldufour@xxxxxxxxxxxxx>
Cc: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Cc: Michel Lespinasse <michel@xxxxxxxxxxxxxx>
Cc: Paul E. McKenney <paulmck@xxxxxxxxxx>
Cc: Vlastimil Babka <vbabka@xxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 include/linux/mm.h |   16 ++++++----------
 1 file changed, 6 insertions(+), 10 deletions(-)

--- a/include/linux/mm.h~mm-handle-userfaults-under-vma-lock-fix
+++ a/include/linux/mm.h
@@ -679,6 +679,7 @@ static inline void vma_end_read(struct v
 	rcu_read_unlock();
 }
 
+/* WARNING! Can only be used if mmap_lock is expected to be write-locked */
 static bool __is_vma_write_locked(struct vm_area_struct *vma, int *mm_lock_seq)
 {
 	mmap_assert_write_locked(vma->vm_mm);
@@ -714,22 +715,17 @@ static inline void vma_start_write(struc
 	up_write(&vma->vm_lock->lock);
 }
 
-static inline void vma_assert_locked(struct vm_area_struct *vma)
+static inline void vma_assert_write_locked(struct vm_area_struct *vma)
 {
 	int mm_lock_seq;
 
-	if (__is_vma_write_locked(vma, &mm_lock_seq))
-		return;
-
-	lockdep_assert_held(&vma->vm_lock->lock);
-	VM_BUG_ON_VMA(!rwsem_is_locked(&vma->vm_lock->lock), vma);
+	VM_BUG_ON_VMA(!__is_vma_write_locked(vma, &mm_lock_seq), vma);
 }
 
-static inline void vma_assert_write_locked(struct vm_area_struct *vma)
+static inline void vma_assert_locked(struct vm_area_struct *vma)
 {
-	int mm_lock_seq;
-
-	VM_BUG_ON_VMA(!__is_vma_write_locked(vma, &mm_lock_seq), vma);
+	if (!rwsem_is_locked(&vma->vm_lock->lock))
+		vma_assert_write_locked(vma);
 }
 
 static inline void vma_mark_detached(struct vm_area_struct *vma, bool detached)
_

Patches currently in -mm which might be from surenb@xxxxxxxxxx are

swap-remove-remnants-of-polling-from-read_swap_cache_async.patch
mm-add-missing-vm_fault_result_trace-name-for-vm_fault_completed.patch
mm-drop-per-vma-lock-when-returning-vm_fault_retry-or-vm_fault_completed.patch
mm-change-folio_lock_or_retry-to-use-vm_fault-directly.patch
mm-handle-swap-page-faults-under-per-vma-lock.patch
mm-handle-userfaults-under-vma-lock.patch



