Re: [PATCH bpf-next v8 3/6] locking/local_lock: Introduce localtry_lock_t

On Fri, Feb 14, 2025 at 4:11 AM Vlastimil Babka <vbabka@xxxxxxx> wrote:
>
> On 2/13/25 04:35, Alexei Starovoitov wrote:
> > From: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
> >
> > In !PREEMPT_RT local_lock_irqsave() disables interrupts to protect the
> > critical section, but it doesn't prevent NMIs, so fully reentrant
> > code cannot use local_lock_irqsave() for exclusive access.
> >
> > Introduce localtry_lock_t and localtry_lock_irqsave(), which
> > disables interrupts and sets acquired=1, so localtry_trylock_irqsave()
> > from NMI attempting to acquire the same lock will return false.
> >
> > In PREEMPT_RT local_lock_irqsave() maps to preemptible spin_lock().
> > Map localtry_trylock_irqsave() to preemptible spin_trylock().
> > When in hard IRQ or NMI, return false right away, since
> > spin_trylock() is not safe due to PI issues.
> >
> > Note there is no need to use local_inc for the acquired variable,
> > since it's a percpu variable with strict nesting scopes.
> >
> > Signed-off-by: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
> > Signed-off-by: Alexei Starovoitov <ast@xxxxxxxxxx>
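
For readers following along, the usage pattern this enables on
!PREEMPT_RT is roughly the following. It's only a sketch; the struct,
function and field names (foo_stock, foo_refill, foo_try_consume, nr)
are made up for illustration, paired with the unlock helpers from this
patch:

  /* hypothetical per-CPU cache; assume localtry_lock_init() ran at boot */
  struct foo_stock {
          localtry_lock_t lock;
          unsigned long nr;
  };
  static DEFINE_PER_CPU(struct foo_stock, foo_stock);

  /* task context: unconditional acquire, like local_lock_irqsave() */
  static void foo_refill(unsigned long n)
  {
          unsigned long flags;

          localtry_lock_irqsave(&foo_stock.lock, flags);
          this_cpu_ptr(&foo_stock)->nr += n;
          localtry_unlock_irqrestore(&foo_stock.lock, flags);
  }

  /* NMI (or any) context: may interrupt the critical section above on
   * this CPU, so it must be prepared for the trylock to fail */
  static bool foo_try_consume(unsigned long n)
  {
          struct foo_stock *stock;
          unsigned long flags;
          bool ret = false;

          if (!localtry_trylock_irqsave(&foo_stock.lock, flags))
                  return false;   /* caller falls back, e.g. via irq_work */

          stock = this_cpu_ptr(&foo_stock);
          if (stock->nr >= n) {
                  stock->nr -= n;
                  ret = true;
          }
          localtry_unlock_irqrestore(&foo_stock.lock, flags);
          return ret;
  }

On PREEMPT_RT the trylock additionally fails outright in NMI/hardirq
context, so the fallback path must always exist.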
>
> ...
>
> >
> > +/* localtry_lock_t variants */
> > +
> > +#define __localtry_lock_init(lock)                           \
> > +do {                                                         \
> > +     __local_lock_init(&(lock)->llock);                      \
> > +     WRITE_ONCE(&(lock)->acquired, 0);                       \
>
> This needs to be WRITE_ONCE((lock)->acquired, 0);

Thanks. Good catch.
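
Right -- WRITE_ONCE() needs the lvalue itself as its first argument,
since the macro (simplified here; the real one is in
include/asm-generic/rwonce.h) boils down to a store through the
address of that argument:

  #define WRITE_ONCE(x, val)  (*(volatile typeof(x) *)&(x) = (val))

so WRITE_ONCE(&(lock)->acquired, 0) ends up taking the address of an
'&' expression, which is not an lvalue, and the build breaks.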

> I'm adopting this implementation for my next slab sheaves RFC. But I'll want
> localtry_trylock() without _irqsave too, so I've added it locally. Posting
> below with the init fix and making the PREEMPT_RT comment clear. Feel free
> to fold everything, it would make it easier for me. Or just the fixes, if
> you don't want code you don't use yourself.

+1
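
To spell out the pattern (a rough sketch with made-up names, not the
actual sheaves code; the plain localtry_unlock() from the patch is
assumed as the matching release): every context uses the trylock and
tolerates failure, so interrupts can stay enabled:

  struct obj_cache {
          localtry_lock_t lock;
          void *cached;
  };
  static DEFINE_PER_CPU(struct obj_cache, obj_cache);

  /* called from task context and from interrupt handlers alike */
  static void *obj_cache_get(void)
  {
          struct obj_cache *c;
          void *obj;

          /* only disables preemption; cheaper than localtry_trylock_irqsave() */
          if (!localtry_trylock(&obj_cache.lock))
                  return NULL;    /* caller falls back to the slow path */

          c = this_cpu_ptr(&obj_cache);
          obj = c->cached;
          c->cached = NULL;
          localtry_unlock(&obj_cache.lock);
          return obj;
  }

An interrupt arriving in the middle of the critical section just sees
its own trylock fail and takes the slow path, so there is no deadlock
even though interrupts are not disabled.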

> ----8<----
> From c4f47afa3d06367d8d54662d6c3a76d3ab6e349d Mon Sep 17 00:00:00 2001
> From: Vlastimil Babka <vbabka@xxxxxxx>
> Date: Thu, 13 Feb 2025 19:38:31 +0100
> Subject: [PATCH] locking/local_lock: add localtry_trylock()
>
> Add a localtry_trylock() variant without _irqsave that will be used in
> slab sheaves implementation. Thanks to only disabling preemption and not
> irqs, it has a lower overhead. It's not necessary to disable irqs to
> avoid a deadlock if the irq context uses trylock and can handle
> failures.
>
> Also make the comment of localtry_trylock_irqsave() more clear, and fix a
> compilation failure in localtry_lock_init().
>
> Signed-off-by: Vlastimil Babka <vbabka@xxxxxxx>
> ---
>  include/linux/local_lock.h          | 13 +++++++++++-
>  include/linux/local_lock_internal.h | 31 +++++++++++++++++++++++++----
>  2 files changed, 39 insertions(+), 5 deletions(-)
>
> diff --git a/include/linux/local_lock.h b/include/linux/local_lock.h
> index 05c254a5d7d3..1a0bc35839e3 100644
> --- a/include/linux/local_lock.h
> +++ b/include/linux/local_lock.h
> @@ -77,6 +77,16 @@
>  #define localtry_lock_irqsave(lock, flags)                             \
>         __localtry_lock_irqsave(lock, flags)
>
> +/**
> + * localtry_trylock - Try to acquire a per CPU local lock.
> + * @lock:      The lock variable
> + *
> + * The function can be used in any context such as NMI or HARDIRQ. Due to
> + * locking constrains it will _always_ fail to acquire the lock in NMI or
> + * HARDIRQ context on PREEMPT_RT.
> + */
> +#define localtry_trylock(lock)         __localtry_trylock(lock)
> +
>  /**
>   * localtry_trylock_irqsave - Try to acquire a per CPU local lock, save and disable
>   *                           interrupts if acquired
> @@ -84,7 +94,8 @@
>   * @flags:     Storage for interrupt flags
>   *
>   * The function can be used in any context such as NMI or HARDIRQ. Due to
> - * locking constrains it will _always_ fail to acquire the lock on PREEMPT_RT.
> + * locking constrains it will _always_ fail to acquire the lock in NMI or
> + * HARDIRQ context on PREEMPT_RT.

+1 as well.

>   */
>  #define localtry_trylock_irqsave(lock, flags)                          \
>         __localtry_trylock_irqsave(lock, flags)
> diff --git a/include/linux/local_lock_internal.h b/include/linux/local_lock_internal.h
> index c1369b300777..67bd13d142fa 100644
> --- a/include/linux/local_lock_internal.h
> +++ b/include/linux/local_lock_internal.h
> @@ -137,7 +137,7 @@ do {                                                                \
>  #define __localtry_lock_init(lock)                             \
>  do {                                                           \
>         __local_lock_init(&(lock)->llock);                      \
> -       WRITE_ONCE(&(lock)->acquired, 0);                       \
> +       WRITE_ONCE((lock)->acquired, 0);                        \
>  } while (0)
>
>  #define __localtry_lock(lock)                                  \
> @@ -167,6 +167,24 @@ do {                                                               \
>                 WRITE_ONCE(lt->acquired, 1);                    \
>         } while (0)
>
> +#define __localtry_trylock(lock)                               \
> +       ({                                                      \
> +               localtry_lock_t *lt;                            \
> +               bool _ret;                                      \
> +                                                               \
> +               preempt_disable();                              \
> +               lt = this_cpu_ptr(lock);                        \
> +               if (!READ_ONCE(lt->acquired)) {                 \
> +                       WRITE_ONCE(lt->acquired, 1);            \
> +                       local_trylock_acquire(&lt->llock);      \
> +                       _ret = true;                            \
> +               } else {                                        \
> +                       _ret = false;                           \
> +                       preempt_enable();                       \
> +               }                                               \
> +               _ret;                                           \
> +       })
> +
>  #define __localtry_trylock_irqsave(lock, flags)                        \
>         ({                                                      \
>                 localtry_lock_t *lt;                            \
> @@ -275,12 +293,10 @@ do {                                                              \
>  #define __localtry_unlock_irq(lock)                    __local_unlock(lock)
>  #define __localtry_unlock_irqrestore(lock, flags)      __local_unlock_irqrestore(lock, flags)
>
> -#define __localtry_trylock_irqsave(lock, flags)                        \
> +#define __localtry_trylock(lock)                               \
>         ({                                                      \
>                 int __locked;                                   \
>                                                                 \
> -               typecheck(unsigned long, flags);                \
> -               flags = 0;                                      \
>                 if (in_nmi() | in_hardirq()) {                  \
>                         __locked = 0;                           \
>                 } else {                                        \
> @@ -292,4 +308,11 @@ do {                                                               \
>                 __locked;                                       \
>         })
>
> +#define __localtry_trylock_irqsave(lock, flags)                        \
> +       ({                                                      \
> +               typecheck(unsigned long, flags);                \
> +               flags = 0;                                      \
> +               __localtry_trylock(lock);                       \
> +       })
> +

This all makes sense to me.

Since a respin is needed, I can fold the above fix/feature in and
push it into a branch with stable SHAs that we can both
use as a base?

Or you can push just this one patch into a stable branch and I can pull it
and apply the rest on top.




