From: Vlastimil Babka <vbabka@xxxxxxx>
Subject: mm, slub: make slab_lock() disable irqs with PREEMPT_RT

We need to disable irqs around slab_lock() (a bit spinlock) to make it
irq-safe. The calls to slab_lock() are nested under spin_lock_irqsave()
which doesn't disable irqs on PREEMPT_RT, so add explicit disabling with
PREEMPT_RT.

We also distinguish cmpxchg_double_slab(), where we do the disabling
explicitly, and __cmpxchg_double_slab(), for contexts with already
disabled irqs. However, these contexts are also typically
spin_lock_irqsave(), which is insufficient on PREEMPT_RT. Thus, change
__cmpxchg_double_slab() to be the same as cmpxchg_double_slab() on
PREEMPT_RT.

Link: https://lkml.kernel.org/r/20210805152000.12817-33-vbabka@xxxxxxx
Signed-off-by: Vlastimil Babka <vbabka@xxxxxxx>
Cc: Christoph Lameter <cl@xxxxxxxxx>
Cc: David Rientjes <rientjes@xxxxxxxxxx>
Cc: Jann Horn <jannh@xxxxxxxxxx>
Cc: Jesper Dangaard Brouer <brouer@xxxxxxxxxx>
Cc: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Cc: Mike Galbraith <efault@xxxxxx>
Cc: Pekka Enberg <penberg@xxxxxxxxxx>
Cc: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/slub.c |   13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

--- a/mm/slub.c~mm-slub-make-slab_lock-disable-irqs-with-preempt_rt
+++ a/mm/slub.c
@@ -380,12 +380,12 @@ __slab_unlock(struct page *page, unsigne
 
 static __always_inline void slab_lock(struct page *page, unsigned long *flags)
 {
-	__slab_lock(page, flags, false);
+	__slab_lock(page, flags, IS_ENABLED(CONFIG_PREEMPT_RT));
 }
 
 static __always_inline void slab_unlock(struct page *page, unsigned long *flags)
 {
-	__slab_unlock(page, flags, false);
+	__slab_unlock(page, flags, IS_ENABLED(CONFIG_PREEMPT_RT));
 }
 
 static inline bool ___cmpxchg_double_slab(struct kmem_cache *s, struct page *page,
@@ -429,14 +429,19 @@ static inline bool ___cmpxchg_double_sla
 	return false;
 }
 
-/* Interrupts must be disabled (for the fallback code to work right) */
+/*
+ * Interrupts must be disabled (for the fallback code to work right), typically
+ * by an _irqsave() lock variant. Except on PREEMPT_RT where locks are different
+ * so we disable interrupts explicitly here.
+ */
 static inline bool __cmpxchg_double_slab(struct kmem_cache *s, struct page *page,
 		void *freelist_old, unsigned long counters_old,
 		void *freelist_new, unsigned long counters_new,
 		const char *n)
 {
 	return ___cmpxchg_double_slab(s, page, freelist_old, counters_old,
-				      freelist_new, counters_new, n, false);
+				      freelist_new, counters_new, n,
+				      IS_ENABLED(CONFIG_PREEMPT_RT));
 }
 
 static inline bool cmpxchg_double_slab(struct kmem_cache *s, struct page *page,
_
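
For context when reading the hunks in isolation: below is a minimal
sketch of the __slab_lock()/__slab_unlock() helpers with the
disable_irqs parameter that this patch builds on (they were introduced
earlier in this series). The bodies are assumed from the hunk context
above, not taken from this patch, so treat them as illustrative only:

	static __always_inline void
	__slab_lock(struct page *page, unsigned long *flags, bool disable_irqs)
	{
		VM_BUG_ON_PAGE(PageTail(page), page);
		if (disable_irqs)
			/* assumed: on PREEMPT_RT the outer locks leave irqs on */
			local_irq_save(*flags);
		bit_spin_lock(PG_locked, &page->flags);
	}

	static __always_inline void
	__slab_unlock(struct page *page, unsigned long *flags, bool disable_irqs)
	{
		VM_BUG_ON_PAGE(PageTail(page), page);
		__bit_spin_unlock(PG_locked, &page->flags);
		if (disable_irqs)
			local_irq_restore(*flags);
	}

Since IS_ENABLED(CONFIG_PREEMPT_RT) is a compile-time constant, the
disable_irqs branches compile away on !PREEMPT_RT configs, so the fast
paths there are unchanged by this patch.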