On Wed, Jul 12, 2023 at 12:56 PM Suren Baghdasaryan <surenb@xxxxxxxxxx> wrote:
>
> __is_vma_write_locked() can be used only when mmap_lock is write-locked
> to guarantee vm_lock_seq and mm_lock_seq stability during the check.
> Therefore it asserts this condition before further checks. Because of
> that it can't be used unless the user expects the mmap_lock to be
> write-locked. vma_assert_locked() can't assume this before ensuring
> that the VMA is not read-locked.
> Change the order of the checks in vma_assert_locked() to check if the
> VMA is read-locked first and only then assert if it's not write-locked.
>
> Fixes: 50b88b63e3e4 ("mm: handle userfaults under VMA lock")
> Reported-by: Liam R. Howlett <liam.howlett@xxxxxxxxxx>
> Closes: https://lore.kernel.org/all/20230712022620.3yytbdh24b7i4zrn@revolver/
> Reported-by: syzbot+339b02f826caafd5f7a8@xxxxxxxxxxxxxxxxxxxxxxxxx
> Closes: https://lore.kernel.org/all/0000000000002db68f05ffb791bc@xxxxxxxxxx/
> Signed-off-by: Suren Baghdasaryan <surenb@xxxxxxxxxx>

Should have mentioned that this patch is for mm-unstable.

> ---
>  include/linux/mm.h | 16 ++++++----------
>  1 file changed, 6 insertions(+), 10 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 9687b48dfb1b..e3b022a66343 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -668,6 +668,7 @@ static inline void vma_end_read(struct vm_area_struct *vma)
>  	rcu_read_unlock();
>  }
>
> +/* WARNING! Can only be used if mmap_lock is expected to be write-locked */
>  static bool __is_vma_write_locked(struct vm_area_struct *vma, int *mm_lock_seq)
>  {
>  	mmap_assert_write_locked(vma->vm_mm);
> @@ -707,22 +708,17 @@ static inline bool vma_try_start_write(struct vm_area_struct *vma)
>  	return true;
>  }
>
> -static inline void vma_assert_locked(struct vm_area_struct *vma)
> +static inline void vma_assert_write_locked(struct vm_area_struct *vma)
>  {
>  	int mm_lock_seq;
>
> -	if (__is_vma_write_locked(vma, &mm_lock_seq))
> -		return;
> -
> -	lockdep_assert_held(&vma->vm_lock->lock);
> -	VM_BUG_ON_VMA(!rwsem_is_locked(&vma->vm_lock->lock), vma);
> +	VM_BUG_ON_VMA(!__is_vma_write_locked(vma, &mm_lock_seq), vma);
>  }
>
> -static inline void vma_assert_write_locked(struct vm_area_struct *vma)
> +static inline void vma_assert_locked(struct vm_area_struct *vma)
>  {
> -	int mm_lock_seq;
> -
> -	VM_BUG_ON_VMA(!__is_vma_write_locked(vma, &mm_lock_seq), vma);
> +	if (!rwsem_is_locked(&vma->vm_lock->lock))
> +		vma_assert_write_locked(vma);
>  }
>
>  static inline void vma_mark_detached(struct vm_area_struct *vma, bool detached)
> --
> 2.41.0.455.g037347b96a-goog
>
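For readers following along outside the kernel tree, here is a minimal userspace sketch of why the ordering matters. It is not kernel code: fake_vma, mmap_write_locked, and the pthread rwlock are hypothetical stand-ins for vm_area_struct, the mmap_lock state, and vma->vm_lock->lock, and the vm_lock_seq/mm_lock_seq comparison is elided. With the old order, the write-lock check runs first and its mmap_lock assertion fires even for a VMA that is legitimately read-locked; with the fixed order, the mmap_lock assertion is only reached when nobody holds the VMA's own lock.

/*
 * Userspace sketch, NOT kernel code. fake_vma, mmap_write_locked and
 * the pthread rwlock are hypothetical stand-ins for vm_area_struct,
 * the mmap_lock state and vma->vm_lock->lock; the seqcount comparison
 * done by __is_vma_write_locked() is elided.
 */
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct fake_vma {
	pthread_rwlock_t vm_lock;	/* stand-in for vma->vm_lock->lock */
	bool mmap_write_locked;		/* stand-in for mmap_lock state */
};

/* Analogue of __is_vma_write_locked(): only valid under mmap write lock. */
static bool is_vma_write_locked(struct fake_vma *vma)
{
	assert(vma->mmap_write_locked);	/* mmap_assert_write_locked() */
	return true;			/* vm_lock_seq comparison elided */
}

static void vma_assert_write_locked_sketch(struct fake_vma *vma)
{
	assert(is_vma_write_locked(vma));	/* VM_BUG_ON_VMA() analogue */
}

/* Fixed ordering: look at the VMA's own lock before touching mmap_lock. */
static void vma_assert_locked_sketch(struct fake_vma *vma)
{
	if (pthread_rwlock_trywrlock(&vma->vm_lock) == 0) {
		/* Lock was free: VMA must be covered by the mmap write lock. */
		pthread_rwlock_unlock(&vma->vm_lock);
		vma_assert_write_locked_sketch(vma);
	}
	/* else a reader (or writer) holds vm_lock, so the VMA is locked */
}

int main(void)
{
	struct fake_vma vma = { .mmap_write_locked = false };

	pthread_rwlock_init(&vma.vm_lock, NULL);
	pthread_rwlock_rdlock(&vma.vm_lock);	/* read-lock path, no mmap_lock */
	vma_assert_locked_sketch(&vma);		/* passes with the fixed order */
	puts("read-locked VMA accepted without mmap_lock held");
	pthread_rwlock_unlock(&vma.vm_lock);
	pthread_rwlock_destroy(&vma.vm_lock);
	return 0;
}

Calling is_vma_write_locked() first in vma_assert_locked_sketch() would trip the mmap_write_locked assertion on this perfectly valid read-locked path, which is the kind of failure the Reported-by links above describe.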