Re: [PATCH v9 11/17] mm: replace vm_lock and detached flag with a reference count

On Wed, Jan 15, 2025 at 07:00:37AM -0800, Suren Baghdasaryan wrote:
> On Wed, Jan 15, 2025 at 3:13 AM Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
> >
> > On Wed, Jan 15, 2025 at 11:48:41AM +0100, Peter Zijlstra wrote:
> > > On Sat, Jan 11, 2025 at 12:14:47PM -0800, Suren Baghdasaryan wrote:
> > >
> > > > > Replacing down_read_trylock() with the new routine loses an acquire
> > > > > fence. That alone is not a problem, but see below.
> > > >
> > > > Hmm. I think this acquire fence is actually necessary. We don't want
> > > > the later vm_lock_seq check to be reordered and happen before we take
> > > > the refcount. Otherwise this might happen:
> > > >
> > > > reader             writer
> > > > if (vm_lock_seq == mm_lock_seq) // check got reordered
> > > >         return false;
> > > >                        vm_refcnt += VMA_LOCK_OFFSET
> > > >                        vm_lock_seq = mm_lock_seq
> > > >                        vm_refcnt -= VMA_LOCK_OFFSET
> > > > if (!__refcount_inc_not_zero_limited())
> > > >         return false;
> > > >
> > > > Both reader's checks will pass and the reader would read-lock a vma
> > > > that was write-locked.
> > >
> > > Hmm, you're right. That acquire does matter here.
> >
> > Notably, it means refcount_t is entirely unsuitable for anything
> > SLAB_TYPESAFE_BY_RCU, since all such uses will need secondary
> > validation after the refcount succeeds.
> 
> Thanks for reviewing, Peter!
> Yes, I'm changing the code to use atomic_t instead of refcount_t and
> it comes out quite nicely I think. I had to add two small helper
> functions:
> vm_refcount_inc() - similar to refcount_add_not_zero() but with an
> acquire fence.
> vm_refcnt_sub() - similar to refcount_sub_and_test(). I could use
> atomic_sub_and_test() but that would add an unnecessary acquire fence
> in the pagefault path, so I'm using the refcount_sub_and_test() logic
> instead.

Right.
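
Something like so, I presume? (Sketch only, to make sure we mean the
same thing; the vm_refcnt field name and the VMA_LOCK_OFFSET check are
my guesses from your description, not the actual patch.)

	/*
	 * Increment the refcount unless the vma is detached (vm_refcnt == 0)
	 * or write-locked (VMA_LOCK_OFFSET set). The acquire on the cmpxchg
	 * orders the later vm_lock_seq check after the increment; that is
	 * the fence discussed above.
	 */
	static inline bool vm_refcount_inc(struct vm_area_struct *vma)
	{
		int old = atomic_read(&vma->vm_refcnt);

		do {
			if (!old || (old & VMA_LOCK_OFFSET))
				return false;
		} while (!atomic_try_cmpxchg_acquire(&vma->vm_refcnt,
						     &old, old + 1));

		return true;
	}

	/*
	 * refcount_sub_and_test() logic: release on the decrement, and only
	 * the final put pays for the acquire, so the common pagefault path
	 * stays free of it.
	 */
	static inline bool vm_refcnt_sub(struct vm_area_struct *vma, int nr)
	{
		if (atomic_fetch_sub_release(nr, &vma->vm_refcnt) == nr) {
			smp_acquire__after_ctrl_dep();
			return true;
		}
		return false;
	}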

> For SLAB_TYPESAFE_BY_RCU I think we are ok with the
> __vma_enter_locked()/__vma_exit_locked() transition in
> vma_mark_detached() before freeing the vma and would not need
> secondary validation. In __vma_enter_locked(), vm_refcnt gets
> VMA_LOCK_OFFSET set, which prevents readers from taking the refcount.
> In __vma_exit_locked(), vm_refcnt transitions to 0, so again that
> prevents readers from taking the refcount. IOW, the readers won't get
> to the secondary validation and will fail early on
> __refcount_inc_not_zero_limited(). I think this transition correctly
> serves the purpose of waiting for current temporary readers to exit
> and preventing new readers from read-locking and using the vma.

Consider:

    CPU0				CPU1

    rcu_read_lock();
    vma = vma_lookup(mm, vaddr);

    ... CPU goes to sleep for a *long time* ...

    					__vma_exit_locked();
					vm_area_free();
					..
					vma = vm_area_alloc();
					vma_mark_attached();

    ... comes back once the vma has been re-used ...

    vma_start_read()
      vm_refcount_inc(); // success!!

At which point we need to validate that the vma belongs to mm and
covers vaddr, which is what patch 15 does, no?
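
That is, after vm_refcount_inc() succeeds the reader still has to do
something like this (sketch; the exact checks are whatever patch 15
ends up with):

	/*
	 * SLAB_TYPESAFE_BY_RCU: a successful refcount increment may have
	 * pinned a re-used object, so re-check its identity while holding
	 * the reference.
	 */
	if (unlikely(vma->vm_mm != mm ||
		     vaddr < vma->vm_start || vaddr >= vma->vm_end)) {
		vma_end_read(vma);	/* drop the reference */
		return NULL;		/* and retry the lookup */
	}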



Also, I seem to have forgotten some braces back in 2008 :-)

---
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 10a971c2bde3..c1356b52f8ea 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -115,9 +115,10 @@ enum _slab_flag_bits {
  *   rcu_read_lock();
  *   obj = lockless_lookup(key);
  *   if (obj) {
- *     if (!try_get_ref(obj)) // might fail for free objects
+ *     if (!try_get_ref(obj)) { // might fail for free objects
  *       rcu_read_unlock();
  *       goto begin;
+ *     }
  *
  *     if (obj->key != key) { // not the object we expected
  *       put_ref(obj);



