On Wed, Jan 8, 2025 at 3:52 AM Vlastimil Babka <vbabka@xxxxxxx> wrote:
>
> On 12/26/24 18:07, Suren Baghdasaryan wrote:
> > rw_semaphore is a sizable structure of 40 bytes and consumes
> > considerable space for each vm_area_struct. However vma_lock has
> > two important specifics which can be used to replace rw_semaphore
> > with a simpler structure:
> > 1. Readers never wait. They try to take the vma_lock and fall back to
> > mmap_lock if that fails.
> > 2. Only one writer at a time will ever try to write-lock a vma_lock
> > because writers first take mmap_lock in write mode.
> > Because of these requirements, full rw_semaphore functionality is not
> > needed and we can replace rw_semaphore and the vma->detached flag with
> > a refcount (vm_refcnt).
> > When vma is in detached state, vm_refcnt is 0 and only a call to
> > vma_mark_attached() can take it out of this state. Note that unlike
> > before, now we enforce both vma_mark_attached() and vma_mark_detached()
> > to be done only after vma has been write-locked. vma_mark_attached()
> > changes vm_refcnt to 1 to indicate that it has been attached to the vma
> > tree. When a reader takes read lock, it increments vm_refcnt, unless the
> > top usable bit of vm_refcnt (0x40000000) is set, indicating presence of
> > a writer. When writer takes write lock, it both increments vm_refcnt and
> > sets the top usable bit to indicate its presence. If there are readers,
> > writer will wait using newly introduced mm->vma_writer_wait. Since all
> > writers take mmap_lock in write mode first, there can be only one writer
> > at a time. The last reader to release the lock will signal the writer
> > to wake up.
> > refcount might overflow if there are many competing readers, in which case
> > read-locking will fail. Readers are expected to handle such failures.
> >
> > Suggested-by: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
> > Suggested-by: Matthew Wilcox <willy@xxxxxxxxxxxxx>
> > Signed-off-by: Suren Baghdasaryan <surenb@xxxxxxxxxx>
>
> >   */
> >  static inline bool vma_start_read(struct vm_area_struct *vma)
> >  {
> > +	int oldcnt;
> > +
> >  	/*
> >  	 * Check before locking. A race might cause false locked result.
> >  	 * We can use READ_ONCE() for the mm_lock_seq here, and don't need
> > @@ -720,13 +745,20 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
> >  	if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(vma->vm_mm->mm_lock_seq.sequence))
> >  		return false;
> >
> > -	if (unlikely(down_read_trylock(&vma->vm_lock.lock) == 0))
> > +
> > +	rwsem_acquire_read(&vma->vmlock_dep_map, 0, 0, _RET_IP_);
> I don't know much about lockdep, but I see that down_read() does
> rwsem_acquire_read(&sem->dep_map, 0, 0, _RET_IP_);
> down_read_trylock() does
> rwsem_acquire_read(&sem->dep_map, 0, 1, _RET_IP_);
> This is passing the down_read()-like variant but it behaves like a
> trylock, no?

Yes, you are correct, this should behave like a trylock. I'll fix it.

>
> > +	/* Limit at VMA_REF_LIMIT to leave one count for a writer */
>
> It's mainly to not increase so much that the VMA_LOCK_OFFSET bit could
> become false positively set by readers, right?

Correct.

> The "leave one count" sounds
> like an implementation detail of VMA_REF_LIMIT and will change if Liam's
> suggestion is proven feasible?

Yes. I already tested Liam's suggestion and it seems to be working fine.
This comment will be gone in the next revision.
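
Coming back to the lockdep point above: the fix is just to pass trylock=1
in the annotation so lockdep treats this acquire the same way it treats
down_read_trylock(). Roughly like this (untested sketch, otherwise the
same code as in this revision):

	/* Annotate as a trylock: the third argument is lockdep's "trylock" flag */
	rwsem_acquire_read(&vma->vmlock_dep_map, 0, 1, _RET_IP_);
	if (unlikely(!__refcount_inc_not_zero_limited(&vma->vm_refcnt, &oldcnt,
						      VMA_REF_LIMIT))) {
		/* Failed trylock: back out the lockdep annotation */
		rwsem_release(&vma->vmlock_dep_map, _RET_IP_);
		return false;
	}
	/* Increment succeeded, report the acquisition to lockdep */
	lock_acquired(&vma->vmlock_dep_map, _RET_IP_);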
>
> > +	if (unlikely(!__refcount_inc_not_zero_limited(&vma->vm_refcnt, &oldcnt,
> > +						      VMA_REF_LIMIT))) {
> > +		rwsem_release(&vma->vmlock_dep_map, _RET_IP_);
> >  		return false;
> > +	}
> > +	lock_acquired(&vma->vmlock_dep_map, _RET_IP_);
> >
> >  	/*
> > -	 * Overflow might produce false locked result.
> > +	 * Overflow of vm_lock_seq/mm_lock_seq might produce false locked result.
> >  	 * False unlocked result is impossible because we modify and check
> > -	 * vma->vm_lock_seq under vma->vm_lock protection and mm->mm_lock_seq
> > +	 * vma->vm_lock_seq under vma->vm_refcnt protection and mm->mm_lock_seq
> >  	 * modification invalidates all existing locks.
> >  	 *
> >  	 * We must use ACQUIRE semantics for the mm_lock_seq so that if we are
> > @@ -734,10 +766,12 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
> >  	 * after it has been unlocked.
> >  	 * This pairs with RELEASE semantics in vma_end_write_all().
> >  	 */
> > -	if (unlikely(vma->vm_lock_seq == raw_read_seqcount(&vma->vm_mm->mm_lock_seq))) {
> > -		up_read(&vma->vm_lock.lock);
> > +	if (unlikely(oldcnt & VMA_LOCK_OFFSET ||
> > +		     vma->vm_lock_seq == raw_read_seqcount(&vma->vm_mm->mm_lock_seq))) {
> > +		vma_refcount_put(vma);
> >  		return false;
> >  	}
> > +
> >  	return true;
> >  }
> >
> > @@ -749,8 +783,17 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
> >   */
> >  static inline bool vma_start_read_locked_nested(struct vm_area_struct *vma, int subclass)
> >  {
> > +	int oldcnt;
> > +
> >  	mmap_assert_locked(vma->vm_mm);
> > -	down_read_nested(&vma->vm_lock.lock, subclass);
> > +	rwsem_acquire_read(&vma->vmlock_dep_map, subclass, 0, _RET_IP_);
> Same as above?

Ack.

>
> > +	/* Limit at VMA_REF_LIMIT to leave one count for a writer */
>
> Also

Ack.

>
> > +	if (unlikely(!__refcount_inc_not_zero_limited(&vma->vm_refcnt, &oldcnt,
> > +						      VMA_REF_LIMIT))) {
> > +		rwsem_release(&vma->vmlock_dep_map, _RET_IP_);
> > +		return false;
> > +	}
> > +	lock_acquired(&vma->vmlock_dep_map, _RET_IP_);
> >  	return true;
> >  }
> >
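
To spell out the vm_refcnt encoding from the changelog in one place,
the states look roughly like this. (Sketch only; the exact
VMA_REF_LIMIT definition below is illustrative and will likely change
once Liam's suggestion is in.)

	#define VMA_LOCK_OFFSET	0x40000000		/* writer-present bit */
	#define VMA_REF_LIMIT	(VMA_LOCK_OFFSET - 1)	/* illustrative: cap readers below the writer bit */

	/*
	 * vm_refcnt == 0			vma is detached
	 * vm_refcnt == 1			attached, not locked
	 * 1 < vm_refcnt <= VMA_REF_LIMIT	attached, read-locked (each reader adds 1)
	 * vm_refcnt & VMA_LOCK_OFFSET		write-locked; the single writer waits on
	 *					mm->vma_writer_wait for readers to drain
	 */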