On Thu, Feb 13, 2025 at 7:04 AM Vlastimil Babka <vbabka@xxxxxxx> wrote:
>
> On 2/13/25 04:35, Alexei Starovoitov wrote:
> > From: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
> >
> > In !PREEMPT_RT local_lock_irqsave() disables interrupts to protect the
> > critical section, but it doesn't prevent NMI, so fully reentrant code
> > cannot use local_lock_irqsave() for exclusive access.
> >
> > Introduce localtry_lock_t and localtry_lock_irqsave(), which disable
> > interrupts and set acquired=1, so that localtry_trylock_irqsave()
> > from NMI attempting to acquire the same lock will return false.
> >
> > In PREEMPT_RT local_lock_irqsave() maps to a preemptible spin_lock().
> > Map localtry_lock_irqsave() to a preemptible spin_trylock().
> > When in hard IRQ or NMI, return false right away, since
> > spin_trylock() is not safe due to PI issues.
> >
> > Note there is no need to use local_inc for the acquired variable,
> > since it's a percpu variable with strict nesting scopes.
> >
> > Signed-off-by: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
> > Signed-off-by: Alexei Starovoitov <ast@xxxxxxxxxx>
> > ---
> >  include/linux/local_lock.h          |  59 +++++++++++++
> >  include/linux/local_lock_internal.h | 123 ++++++++++++++++++++++++++++
> >  2 files changed, 182 insertions(+)
> >
> > diff --git a/include/linux/local_lock.h b/include/linux/local_lock.h
> > index 091dc0b6bdfb..05c254a5d7d3 100644
> > --- a/include/linux/local_lock.h
> > +++ b/include/linux/local_lock.h
> > @@ -51,6 +51,65 @@
> >  #define local_unlock_irqrestore(lock, flags)	\
> >  	__local_unlock_irqrestore(lock, flags)
> >
> > +/**
> > + * localtry_lock_init - Runtime initialize a lock instance
> > + */
> > +#define localtry_lock_init(lock)	__localtry_lock_init(lock)
> > +
> > +/**
> > + * localtry_lock - Acquire a per CPU local lock
> > + * @lock:	The lock variable
> > + */
> > +#define localtry_lock(lock)	__localtry_lock(lock)
> > +
> > +/**
> > + * localtry_lock_irq - Acquire a per CPU local lock and disable interrupts
> > + * @lock:	The lock variable
> > + */
> > +#define localtry_lock_irq(lock)	__localtry_lock_irq(lock)
> > +
> > +/**
> > + * localtry_lock_irqsave - Acquire a per CPU local lock, save and disable
> > + *			   interrupts
> > + * @lock:	The lock variable
> > + * @flags:	Storage for interrupt flags
> > + */
> > +#define localtry_lock_irqsave(lock, flags)	\
> > +	__localtry_lock_irqsave(lock, flags)
> > +
> > +/**
> > + * localtry_trylock_irqsave - Try to acquire a per CPU local lock, save and disable
> > + *			      interrupts if acquired
> > + * @lock:	The lock variable
> > + * @flags:	Storage for interrupt flags
> > + *
> > + * The function can be used in any context such as NMI or HARDIRQ. Due to
> > + * locking constraints it will _always_ fail to acquire the lock on PREEMPT_RT.
>
> The "always fail" applies only to the NMI and HARDIRQ contexts, right? It's
> not entirely obvious so it sounds worse than it is.
>
> > +
> > +#define __localtry_trylock_irqsave(lock, flags)	\
> > +	({						\
> > +		int __locked;				\
> > +							\
> > +		typecheck(unsigned long, flags);	\
> > +		flags = 0;				\
> > +		if (in_nmi() | in_hardirq()) {		\
> > +			__locked = 0;			\
>
> Because of this, IIUC?

Right. It's part of the commit log:

+ In PREEMPT_RT local_lock_irqsave() maps to a preemptible spin_lock().
+ Map localtry_lock_irqsave() to a preemptible spin_trylock().
+ When in hard IRQ or NMI, return false right away, since
+ spin_trylock() is not safe due to PI issues.

Steven explained it in detail in an earlier thread.
Realtime is hard. bpf and realtime together are even harder.
Things have gotten much better over the years, but there's plenty of work
ahead. I can go into detail, but that's offtopic for this thread.
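
For anyone following along, the intended usage pattern looks roughly like
this. It's a minimal sketch: struct foo_percpu, foo_inc() and the
INIT_LOCALTRY_LOCK initializer name are my own illustration (assuming the
initializer mirrors INIT_LOCAL_LOCK and that localtry_unlock_irqrestore()
is the unlock counterpart); only the localtry_* trylock call is shown in
the quoted hunks above.

/*
 * Hypothetical caller of the new API. Everything named foo_* is made up
 * for illustration. INIT_LOCALTRY_LOCK and localtry_unlock_irqrestore()
 * are assumed to follow the existing local_lock naming.
 */
struct foo_percpu {
	localtry_lock_t	lock;
	u64		counter;
};

static DEFINE_PER_CPU(struct foo_percpu, foo) = {
	.lock = INIT_LOCALTRY_LOCK(lock),
};

/* Callable from any context, including NMI. */
static bool foo_inc(void)
{
	unsigned long flags;

	/*
	 * Fails if this CPU is already inside the critical section
	 * (e.g. an NMI interrupted it), and always fails from
	 * NMI/hardirq on PREEMPT_RT, as discussed above.
	 */
	if (!localtry_trylock_irqsave(&foo.lock, flags))
		return false;

	this_cpu_inc(foo.counter);
	localtry_unlock_irqrestore(&foo.lock, flags);
	return true;
}

The point being that, unlike local_lock_irqsave(), the caller must handle
the failure path, which is exactly what makes it usable from reentrant
code.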