On Tue, Jan 17, 2023 at 10:21:28AM -0800, Suren Baghdasaryan wrote:
> static inline bool vma_read_trylock(struct vm_area_struct *vma)
> {
> 	int count, new;
>
> 	/* Check before locking. A race might cause false locked result. */
> 	if (READ_ONCE(vma->vm_lock->lock_seq) ==
> 	    READ_ONCE(vma->vm_mm->mm_lock_seq))
> 		return false;
>
> 	count = atomic_read(&vma->vm_lock->count);
> 	for (;;) {
> 		/*
> 		 * Is VMA write-locked? Overflow might produce false locked result.
> 		 * False unlocked result is impossible because we modify and check
> 		 * vma->vm_lock_seq under vma->vm_lock protection and mm->mm_lock_seq
> 		 * modification invalidates all existing locks.
> 		 */
> 		if (count < 0)
> 			return false;
>
> 		new = count + 1;
> 		/* If atomic_t overflows, fail to lock. */
> 		if (new < 0)
> 			return false;
>
> 		/*
> 		 * Atomic RMW will provide implicit mb on success to pair with
> 		 * smp_wmb in vma_write_lock, on failure we retry.
> 		 */
> 		new = atomic_cmpxchg(&vma->vm_lock->count, count, new);
> 		if (new == count)
> 			break;
> 		count = new;
> 		cpu_relax();

The cpu_relax() is exactly the wrong thing to do here.  See this thread:

https://lore.kernel.org/linux-fsdevel/20230113184447.1707316-1-mjguzik@xxxxxxxxx/
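
[Editor's note, not part of the original mail: the objection is to spinning
with cpu_relax() in a cmpxchg retry loop, since a failed cmpxchg already
returns the current value and the loop can simply retry with it.  Below is a
minimal sketch of how the quoted loop might look without cpu_relax(), using
atomic_try_cmpxchg() (which refreshes the expected value on failure).  Field
names such as vma->vm_lock->count come from the quoted patch; treating
INT_MAX as the overflow guard and the trailing return are assumptions, and
whether this matches the approach in the linked thread is not confirmed
here.]

	static inline bool vma_read_trylock(struct vm_area_struct *vma)
	{
		int count;

		/* Check before locking. A race might cause false locked result. */
		if (READ_ONCE(vma->vm_lock->lock_seq) ==
		    READ_ONCE(vma->vm_mm->mm_lock_seq))
			return false;

		count = atomic_read(&vma->vm_lock->count);
		do {
			/* Write-locked (negative count) or about to overflow: fail. */
			if (count < 0 || count == INT_MAX)
				return false;
			/*
			 * atomic_try_cmpxchg() writes the current value back into
			 * 'count' on failure, so the loop retries with fresh data
			 * and needs neither cpu_relax() nor an extra atomic_read().
			 */
		} while (!atomic_try_cmpxchg(&vma->vm_lock->count, &count, count + 1));

		/*
		 * The quoted mail cuts off before the end of the function; the
		 * rest of the original body is omitted from this sketch.
		 */
		return true;
	}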