On 8/24/22 18:25, Sebastian Andrzej Siewior wrote:
> On 2022-08-23 19:04:00 [+0200], Vlastimil Babka wrote:
>> The PREEMPT_RT specific disabling of irqs in __cmpxchg_double_slab()
>> (through slab_[un]lock()) is unnecessary as bit_spin_lock() disables
>> preemption and that's sufficient on RT where interrupts are threaded.
>
> maybe something like
> "… sufficient on PREEMPT_RT where no allocation/ free operation is
> performed in hardirq context and so interrupt the current operation."
>
>> That means we no longer need the slab_[un]lock() wrappers, so delete
>> them and rename the current __slab_[un]lock() to slab_[un]lock().
>>
>> Signed-off-by: Vlastimil Babka <vbabka@xxxxxxx>
>> Acked-by: David Rientjes <rientjes@xxxxxxxxxx>
>
> Reviewed-by: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>

Thanks, incorporated your wording suggestions.

>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -454,7 +454,7 @@ static __always_inline void __slab_lock(struct slab *slab)
> …
>>  /*
>>   * Interrupts must be disabled (for the fallback code to work right), typically
>> - * by an _irqsave() lock variant. Except on PREEMPT_RT where locks are different
>> - * so we disable interrupts as part of slab_[un]lock().
>> + * by an _irqsave() lock variant. Except on PREEMPT_RT where these variants do
>> + * not actually disable interrupts. On the other hand the migrate_disable()
>> + * done by bit_spin_lock() is sufficient on PREEMPT_RT thanks to its threaded
>> + * interrupts.
>
> " On PREEMPT_RT the
>   preempt_disable(), which is part of bit_spin_lock(), is sufficient
>   because the policy is not to allow any allocation/ free operation in
>   hardirq context. Therefore nothing can interrupt the operation."
>
> This also includes SMP function calls (IPI).
>
>>  */
>>  static inline bool __cmpxchg_double_slab(struct kmem_cache *s, struct slab *slab,
>>  					void *freelist_old, unsigned long counters_old,
>
> Sebastian
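
For anyone following along, here is a rough sketch of what the rename amounts
to, assuming the existing __slab_[un]lock() bodies (a bit spinlock on
PG_locked of the slab's page) are simply kept under the new names; this is an
illustration of the described change, not the exact patch hunk:

	/*
	 * Sketch: former __slab_lock(), now just slab_lock(). The PREEMPT_RT
	 * local_irq_save()/restore() wrappers are gone; bit_spin_lock() on
	 * PG_locked disables preemption for the short critical section.
	 */
	static __always_inline void slab_lock(struct slab *slab)
	{
		struct page *page = slab_page(slab);

		VM_BUG_ON_PAGE(PageTail(page), page);
		bit_spin_lock(PG_locked, &page->flags);
	}

	static __always_inline void slab_unlock(struct slab *slab)
	{
		struct page *page = slab_page(slab);

		VM_BUG_ON_PAGE(PageTail(page), page);
		__bit_spin_unlock(PG_locked, &page->flags);
	}

Per the wording above, keeping preemption disabled via bit_spin_lock() is
enough on PREEMPT_RT because no allocation/free operation runs in hardirq
context, so nothing can interrupt the locked section.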