Re: [PATCH bpf-next v5 3/7] locking/local_lock: Introduce local_trylock_irqsave()

On Tue, Jan 14, 2025 at 6:18 PM Alexei Starovoitov
<alexei.starovoitov@xxxxxxxxx> wrote:
>
> From: Alexei Starovoitov <ast@xxxxxxxxxx>
>
> Similar to local_lock_irqsave(), introduce local_trylock_irqsave().
> This is inspired by 'struct local_tryirq_lock' in:
> https://lore.kernel.org/all/20241112-slub-percpu-caches-v1-5-ddc0bdc27e05@xxxxxxx/
>
> In PREEMPT_RT, use spin_trylock() when not in hard IRQ and not in NMI,
> and fail instantly otherwise, since spin_trylock() is not safe from
> hard IRQ context due to PI issues.
>
> In !PREEMPT_RT, use a simple 'active' flag to prevent IRQs or NMIs
> from reentering the locked region.
>
> Note there is no need to use local_inc for the 'active' flag.
> If an IRQ handler grabs the same local_lock after READ_ONCE(lock->active)
> has already completed, it has to unlock it before returning.
> The same applies to an NMI handler, so there is a strict nesting of
> scopes. It's a per-CPU lock; multiple CPUs do not access it in parallel.
>
> Signed-off-by: Alexei Starovoitov <ast@xxxxxxxxxx>
> ---
>  include/linux/local_lock.h          |  9 ++++
>  include/linux/local_lock_internal.h | 76 ++++++++++++++++++++++++++---
>  2 files changed, 78 insertions(+), 7 deletions(-)
>
> diff --git a/include/linux/local_lock.h b/include/linux/local_lock.h
> index 091dc0b6bdfb..84ee560c4f51 100644
> --- a/include/linux/local_lock.h
> +++ b/include/linux/local_lock.h
> @@ -30,6 +30,15 @@
>  #define local_lock_irqsave(lock, flags)                                \
>         __local_lock_irqsave(lock, flags)
>
> +/**
> + * local_trylock_irqsave - Try to acquire a per CPU local lock, save and disable
> + *                        interrupts. Always fails in RT when in_hardirq or NMI.
> + * @lock:      The lock variable
> + * @flags:     Storage for interrupt flags
> + */
> +#define local_trylock_irqsave(lock, flags)                     \
> +       __local_trylock_irqsave(lock, flags)
> +
>  /**
>   * local_unlock - Release a per CPU local lock
>   * @lock:      The lock variable
> diff --git a/include/linux/local_lock_internal.h b/include/linux/local_lock_internal.h
> index 8dd71fbbb6d2..93672127c73d 100644
> --- a/include/linux/local_lock_internal.h
> +++ b/include/linux/local_lock_internal.h
> @@ -9,6 +9,7 @@
>  #ifndef CONFIG_PREEMPT_RT
>
>  typedef struct {
> +       int active;
>  #ifdef CONFIG_DEBUG_LOCK_ALLOC
>         struct lockdep_map      dep_map;
>         struct task_struct      *owner;
> @@ -22,7 +23,7 @@ typedef struct {
>                 .wait_type_inner = LD_WAIT_CONFIG,      \
>                 .lock_type = LD_LOCK_PERCPU,            \
>         },                                              \
> -       .owner = NULL,
> +       .owner = NULL, .active = 0
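
For readers without the rest of the diff: the !PREEMPT_RT fast path the
commit message describes amounts to roughly the following. This is a
sketch of the active-flag technique, not the literal hunk from the
patch; local_lock_acquire() is the existing lockdep hook in
local_lock_internal.h, and debug checks are elided.

#define __local_trylock_irqsave(lock, flags)			\
	({							\
		local_lock_t *l;				\
		int ret = 0;					\
								\
		local_irq_save(flags);				\
		l = this_cpu_ptr(lock);				\
		if (READ_ONCE(l->active)) {			\
			/* Reentered from IRQ/NMI: fail fast */	\
			local_irq_restore(flags);		\
		} else {					\
			/* IRQs are off; only strictly nested */ \
			/* IRQ/NMI handlers can observe this  */ \
			WRITE_ONCE(l->active, 1);		\
			local_lock_acquire(l);			\
			ret = 1;				\
		}						\
		ret;						\
	})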

Sebastian,

could you please review/ack this patch?

I looked through all current users of local_lock, and the extra 'active'
flag will be in the noise in all cases, so I don't see any runtime/memory
concerns, while the extra lockdep_assert to catch reentrancy issues
in RT and !RT will certainly help long term.
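
On the PREEMPT_RT side, "fail instantly in hard IRQ and NMI, otherwise
spin_trylock()" translates to roughly the sketch below. Again
illustrative rather than the exact hunk: it assumes the RT local_lock
is the usual per-CPU spinlock_t and that migrate_disable() pairs with
the trylock the same way it does in __local_lock().

#define __local_trylock_irqsave(lock, flags)			\
	({							\
		int ret = 0;					\
								\
		typecheck(unsigned long, flags);		\
		/* IRQs stay enabled on RT */			\
		flags = 0;					\
		if (!in_nmi() && !in_hardirq()) {		\
			migrate_disable();			\
			ret = spin_trylock(this_cpu_ptr(lock));	\
			if (!ret)				\
				migrate_enable();		\
		}						\
		ret;						\
	})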




