On Thu, Oct 10, 2024 at 01:56:41PM -0700, Andrii Nakryiko wrote:
> From: Suren Baghdasaryan <surenb@xxxxxxxxxx>
>
> Add helper functions to speculatively perform operations without
> read-locking mmap_lock, expecting that mmap_lock will not be
> write-locked and mm is not modified from under us.
>
> Suggested-by: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
> Signed-off-by: Suren Baghdasaryan <surenb@xxxxxxxxxx>
> Signed-off-by: Andrii Nakryiko <andrii@xxxxxxxxxx>
> Link: https://lore.kernel.org/bpf/20240912210222.186542-1-surenb@xxxxxxxxxx
> ---
>  include/linux/mm_types.h  |  3 ++
>  include/linux/mmap_lock.h | 72 ++++++++++++++++++++++++++++++++-------
>  kernel/fork.c             |  3 --
>  3 files changed, 63 insertions(+), 15 deletions(-)
>
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 6e3bdf8e38bc..5d8cdebd42bc 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -887,6 +887,9 @@ struct mm_struct {
>  	 * Roughly speaking, incrementing the sequence number is
>  	 * equivalent to releasing locks on VMAs; reading the sequence
>  	 * number can be part of taking a read lock on a VMA.
> +	 * Incremented every time mmap_lock is write-locked/unlocked.
> +	 * Initialized to 0, therefore odd values indicate mmap_lock
> +	 * is write-locked and even values that it's released.
>  	 *
>  	 * Can be modified under write mmap_lock using RELEASE
>  	 * semantics.
> diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
> index de9dc20b01ba..9d23635bc701 100644
> --- a/include/linux/mmap_lock.h
> +++ b/include/linux/mmap_lock.h
> @@ -71,39 +71,84 @@ static inline void mmap_assert_write_locked(const struct mm_struct *mm)
>  }
>
>  #ifdef CONFIG_PER_VMA_LOCK
> +static inline void init_mm_lock_seq(struct mm_struct *mm)
> +{
> +	mm->mm_lock_seq = 0;
> +}
> +
>  /*
> - * Drop all currently-held per-VMA locks.
> - * This is called from the mmap_lock implementation directly before releasing
> - * a write-locked mmap_lock (or downgrading it to read-locked).
> - * This should normally NOT be called manually from other places.
> - * If you want to call this manually anyway, keep in mind that this will release
> - * *all* VMA write locks, including ones from further up the stack.
> + * Increment mm->mm_lock_seq when mmap_lock is write-locked (ACQUIRE semantics)
> + * or write-unlocked (RELEASE semantics).
>  	 */
> -static inline void vma_end_write_all(struct mm_struct *mm)
> +static inline void inc_mm_lock_seq(struct mm_struct *mm, bool acquire)
>  {
>  	mmap_assert_write_locked(mm);
>  	/*
>  	 * Nobody can concurrently modify mm->mm_lock_seq due to exclusive
>  	 * mmap_lock being held.
> -	 * We need RELEASE semantics here to ensure that preceding stores into
> -	 * the VMA take effect before we unlock it with this store.
> -	 * Pairs with ACQUIRE semantics in vma_start_read().
>  	 */
> -	smp_store_release(&mm->mm_lock_seq, mm->mm_lock_seq + 1);
> +
> +	if (acquire) {
> +		WRITE_ONCE(mm->mm_lock_seq, mm->mm_lock_seq + 1);
> +		/*
> +		 * For ACQUIRE semantics we should ensure no following stores are
> +		 * reordered to appear before the mm->mm_lock_seq modification.
> +		 */
> +		smp_wmb();

Strictly speaking this isn't ACQUIRE, nor do we care about ACQUIRE
here. This really is about subsequent stores; loads are irrelevant.

> +	} else {
> +		/*
> +		 * We need RELEASE semantics here to ensure that preceding stores
> +		 * into the VMA take effect before we unlock it with this store.
> +		 * Pairs with ACQUIRE semantics in vma_start_read().
> +		 */

Again, not strictly true. We don't care about loads. Using RELEASE
here is fine and probably cheaper on a few platforms, but we don't
strictly need/care about RELEASE.

> +		smp_store_release(&mm->mm_lock_seq, mm->mm_lock_seq + 1);
> +	}
> +}

Also, it might be saner to stick closer to the seqcount naming of
things and use two different functions for these two different things:

/* straight up copy of do_raw_write_seqcount_begin() */
static inline void mm_write_seqcount_begin(struct mm_struct *mm)
{
	kcsan_nestable_atomic_begin();
	mm->mm_lock_seq++;
	smp_wmb();
}

/* straight up copy of do_raw_write_seqcount_end() */
static inline void mm_write_seqcount_end(struct mm_struct *mm)
{
	smp_wmb();
	mm->mm_lock_seq++;
	kcsan_nestable_atomic_end();
}

Or better yet, just use seqcount...
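For illustration, the seqcount route could look something like the
below (just a sketch, assuming mm_lock_seq were converted from int to
seqcount_t; the helper names here are made up):

#include <linux/seqlock.h>

/* writer side, serialized by mmap_lock held for write */
static inline void mm_lock_seq_write_begin(struct mm_struct *mm)
{
	/* count goes odd; provides the smp_wmb() discussed above */
	raw_write_seqcount_begin(&mm->mm_lock_seq);
}

static inline void mm_lock_seq_write_end(struct mm_struct *mm)
{
	/* smp_wmb(), then count goes even again */
	raw_write_seqcount_end(&mm->mm_lock_seq);
}

/* reader side; returns false if a writer is currently active */
static inline bool mm_lock_seq_speculate_begin(struct mm_struct *mm,
					       unsigned int *seq)
{
	*seq = raw_read_seqcount(&mm->mm_lock_seq); /* keeps the odd bit */
	return !(*seq & 1);
}

/* returns true if the speculative section raced with a writer */
static inline bool mm_lock_seq_speculate_retry(struct mm_struct *mm,
					       unsigned int seq)
{
	return read_seqcount_retry(&mm->mm_lock_seq, seq);
}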
> +
> +static inline bool mmap_lock_speculation_start(struct mm_struct *mm, int *seq)
> +{
> +	/* Pairs with RELEASE semantics in inc_mm_lock_seq(). */
> +	*seq = smp_load_acquire(&mm->mm_lock_seq);
> +	/* Allow speculation if mmap_lock is not write-locked */
> +	return (*seq & 1) == 0;
> +}
> +
> +static inline bool mmap_lock_speculation_end(struct mm_struct *mm, int seq)
> +{
> +	/* Pairs with ACQUIRE semantics in inc_mm_lock_seq(). */
> +	smp_rmb();
> +	return seq == READ_ONCE(mm->mm_lock_seq);
> +}

Because there's nothing better than well-known functions with a
randomly different name and interface, I suppose...

Anyway, all the actual code proposed is not wrong. I'm just a bit
annoyed it's a random NIH of seqcount.
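For completeness, the intended read-side usage of the helpers as
proposed would presumably look something like this (a sketch only; the
sampled field is purely illustrative and not taken from the patch):

/* speculatively sample a field normally protected by mmap_lock */
static bool mm_sample_map_count(struct mm_struct *mm, int *map_count)
{
	int seq;

	if (!mmap_lock_speculation_start(mm, &seq))
		return false;	/* write-locked, don't speculate */

	*map_count = data_race(mm->map_count);	/* speculative load */

	/* only valid if no write lock/unlock cycle happened meanwhile */
	return mmap_lock_speculation_end(mm, seq);
}

On failure the caller would fall back to taking mmap_read_lock() and
redoing the read under the lock.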