Re: [PATCH 11/15] stackdepot: use read/write lock

On Wed, Aug 30, 2023 at 11:13 AM Marco Elver <elver@xxxxxxxxxx> wrote:
>
> > -static int new_pool_required = 1;
> > +static bool new_pool_required = true;
> > +/* Lock that protects the variables above. */
> > +static DEFINE_RWLOCK(pool_rwlock);
>
> Despite this being a rwlock, it'll introduce tons of (cache) contention
> for the common case (stack depot entry exists).
>
> If creating new stack depot entries is only common during "warm-up" and
> then becomes exceedingly rare, I think a percpu-rwsem (read-lock is a
> CPU-local access, but write-locking is expensive) may be preferable.

Good suggestion. I propose we keep the rwlock for now, and I'll check
whether a percpu-rwsem performs better once I get to implementing and
testing the performance changes. I'll also check whether a
percpu-rwsem makes sense for the stack ring in the tag-based KASAN
modes.
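
For reference, a rough sketch of what that could look like (names and
call sites are hypothetical: pool_rwsem stands in for pool_rwlock, the
find_stack()/depot_alloc_stack() fragment is schematic, and this
ignores whether all stack depot callers are allowed to sleep on the
read-side slow path):

#include <linux/percpu-rwsem.h>

/* Hypothetical replacement for pool_rwlock. */
static DEFINE_STATIC_PERCPU_RWSEM(pool_rwsem);

	/* Common case: the stack is already in the depot, readers only. */
	percpu_down_read(&pool_rwsem);
	found = find_stack(bucket, entries, nr_entries, hash);
	percpu_up_read(&pool_rwsem);

	if (!found) {
		/* Rare case: a new entry has to be allocated. */
		percpu_down_write(&pool_rwsem);
		/* ... depot_alloc_stack() and hash table insertion ... */
		percpu_up_write(&pool_rwsem);
	}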

> > @@ -262,10 +258,8 @@ static void depot_keep_new_pool(void **prealloc)
> >       /*
> >        * If a new pool is already saved or the maximum number of
> >        * pools is reached, do not use the preallocated memory.
> > -      * READ_ONCE is only used to mark the variable as atomic,
> > -      * there are no concurrent writes.
> >        */
> > -     if (!READ_ONCE(new_pool_required))
> > +     if (!new_pool_required)
>
> In my comment for the other patch I already suggested this change. Maybe
> move it there.

Will do in v2.

>
> >               return;
> >
> >       /*
> > @@ -281,9 +275,8 @@ static void depot_keep_new_pool(void **prealloc)
> >        * At this point, either a new pool is kept or the maximum
> >        * number of pools is reached. In either case, take note that
> >        * keeping another pool is not required.
> > -      * smp_store_release pairs with smp_load_acquire in stack_depot_save.
> >        */
> > -     smp_store_release(&new_pool_required, 0);
> > +     new_pool_required = false;
> >  }
> >
> >  /* Updates references to the current and the next stack depot pools. */
> > @@ -300,7 +293,7 @@ static bool depot_update_pools(void **prealloc)
> >
> >               /* Take note that we might need a new new_pool. */
> >               if (pools_num < DEPOT_MAX_POOLS)
> > -                     smp_store_release(&new_pool_required, 1);
> > +                     new_pool_required = true;
> >
> >               /* Try keeping the preallocated memory for new_pool. */
> >               goto out_keep_prealloc;
> > @@ -369,18 +362,13 @@ depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
> >  static struct stack_record *depot_fetch_stack(depot_stack_handle_t handle)
> >  {
> >       union handle_parts parts = { .handle = handle };
> > -     /*
> > -      * READ_ONCE pairs with potential concurrent write in
> > -      * depot_init_pool.
> > -      */
> > -     int pools_num_cached = READ_ONCE(pools_num);
> >       void *pool;
> >       size_t offset = parts.offset << DEPOT_STACK_ALIGN;
> >       struct stack_record *stack;
>
> I'd add lockdep assertions to check that the lock is held appropriately
> when entering various helper functions that don't actually take the
> lock. Similarly for places that should not have the lock held you could
> assert the lock is not held.

Will do in v2.
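
Presumably something along these lines, e.g. for depot_fetch_stack()
(a sketch only; lockdep_assert_held() accepts any lock with a dep_map,
including rwlock_t, and lockdep_assert_not_held() covers the opposite
case):

/* In helpers that rely on the caller holding pool_rwlock: */
static struct stack_record *depot_fetch_stack(depot_stack_handle_t handle)
{
	union handle_parts parts = { .handle = handle };

	/* The caller must hold pool_rwlock (read or write). */
	lockdep_assert_held(&pool_rwlock);

	/* ... */
}

/* And in places that must not be called with the lock held: */
	lockdep_assert_not_held(&pool_rwlock);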

Thanks!




