On Sun, Jan 12, 2025 at 6:25 PM Wei Yang <richard.weiyang@xxxxxxxxx> wrote:
>
> On Mon, Jan 13, 2025 at 01:47:29AM +0000, Wei Yang wrote:
> >On Sat, Jan 11, 2025 at 12:14:47PM -0800, Suren Baghdasaryan wrote:
> >>On Sat, Jan 11, 2025 at 3:24 AM Mateusz Guzik <mjguzik@xxxxxxxxx> wrote:
> >>>
> >>> On Fri, Jan 10, 2025 at 08:25:58PM -0800, Suren Baghdasaryan wrote:
> >>>
> >>> So there were quite a few iterations of the patch and I have not been
> >>> reading majority of the feedback, so it may be I missed something,
> >>> apologies upfront. :)
> >>>
> >
> >Hi, I am new to memory barriers. Hope not bothering.
> >
> >>> > /*
> >>> >  * Try to read-lock a vma. The function is allowed to occasionally yield false
> >>> >  * locked result to avoid performance overhead, in which case we fall back to
> >>> > @@ -710,6 +742,8 @@ static inline void vma_lock_init(struct vm_area_struct *vma)
> >>> >  */
> >>> > static inline bool vma_start_read(struct vm_area_struct *vma)
> >>> > {
> >>> > +	int oldcnt;
> >>> > +
> >>> > 	/*
> >>> > 	 * Check before locking. A race might cause false locked result.
> >>> > 	 * We can use READ_ONCE() for the mm_lock_seq here, and don't need
> >>> > @@ -720,13 +754,19 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
> >>> > 	 */
> >>> > 	if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(vma->vm_mm->mm_lock_seq.sequence))
> >>> > 		return false;
> >>> >
> >>> > -	if (unlikely(down_read_trylock(&vma->vm_lock.lock) == 0))
> >>> > +	/*
> >>> > +	 * If VMA_LOCK_OFFSET is set, __refcount_inc_not_zero_limited() will fail
> >>> > +	 * because VMA_REF_LIMIT is less than VMA_LOCK_OFFSET.
> >>> > +	 */
> >>> > +	if (unlikely(!__refcount_inc_not_zero_limited(&vma->vm_refcnt, &oldcnt,
> >>> > +						      VMA_REF_LIMIT)))
> >>> > 		return false;
> >>> >
> >>>
> >>> Replacing down_read_trylock() with the new routine loses an acquire
> >>> fence. That alone is not a problem, but see below.
> >>
> >>Hmm. I think this acquire fence is actually necessary. We don't want
> >>the later vm_lock_seq check to be reordered and happen before we take
> >>the refcount. Otherwise this might happen:
> >>
> >>reader                                  writer
> >>if (vm_lock_seq == mm_lock_seq) // check got reordered
> >>        return false;
> >>                                        vm_refcnt += VMA_LOCK_OFFSET
> >>                                        vm_lock_seq == mm_lock_seq
> >>                                        vm_refcnt -= VMA_LOCK_OFFSET
> >>if (!__refcount_inc_not_zero_limited())
> >>        return false;
> >>
> >>Both reader's checks will pass and the reader would read-lock a vma
> >>that was write-locked.
> >>
> >
> >Here what we plan to do is define __refcount_inc_not_zero_limited() with
> >acquire fence, e.g. with atomic_try_cmpxchg_acquire(), right?
> >
>
> BTW, usually we pair acquire with release.
>
> The __vma_start_write() provide release fence when locked, so for this part
> we are ok, right?

Yes, __vma_start_write() -> __vma_exit_locked() -> refcount_sub_and_test()
and this function provides release memory ordering, see
https://elixir.bootlin.com/linux/v6.12.6/source/include/linux/refcount.h#L289

>
> --
> Wei Yang
> Help you, Help me
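
For readers following along, here is a rough sketch of how an acquire-fenced
variant of the limited increment could look, modeled on
__refcount_add_not_zero() in include/linux/refcount.h but using
atomic_try_cmpxchg_acquire(). This is illustrative only -- the exact helper
body, the limit comparison and any saturation handling in the actual patch
series may differ:

/*
 * Illustrative sketch, not the exact helper from the patch series.
 * Like __refcount_add_not_zero(), but the cmpxchg uses ACQUIRE ordering,
 * so the vm_lock_seq check that follows in vma_start_read() cannot be
 * reordered before the increment, and the increment is refused once the
 * count exceeds @limit (VMA_REF_LIMIT is below VMA_LOCK_OFFSET, so a
 * write-locked vma always fails this check).
 */
static inline bool __refcount_inc_not_zero_limited(refcount_t *r, int *oldp,
						    int limit)
{
	int old = refcount_read(r);

	do {
		if (!old || old > limit)
			return false;
	} while (!atomic_try_cmpxchg_acquire(&r->refs, &old, old + 1));

	if (oldp)
		*oldp = old;

	return true;
}

The ACQUIRE on the successful increment then pairs with the RELEASE that
refcount_sub_and_test() provides on the subtraction (see the refcount.h link
above) when __vma_exit_locked() drops VMA_LOCK_OFFSET, which is the
acquire/release pairing Wei asked about.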