The patch titled
     Subject: mm: slub: make object_map_lock a raw_spinlock_t
has been added to the -mm tree.  Its filename is
     mm-slub-make-object_map_lock-a-raw_spinlock_t.patch

This patch should soon appear at
    https://ozlabs.org/~akpm/mmots/broken-out/mm-slub-make-object_map_lock-a-raw_spinlock_t.patch
and later at
    https://ozlabs.org/~akpm/mmotm/broken-out/mm-slub-make-object_map_lock-a-raw_spinlock_t.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
Subject: mm: slub: make object_map_lock a raw_spinlock_t

The variable object_map is protected by object_map_lock.  The lock is
always acquired in debug code and within an already atomic context.  Make
object_map_lock a raw_spinlock_t.

Link: https://lkml.kernel.org/r/20210805152000.12817-31-vbabka@xxxxxxx
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
Signed-off-by: Vlastimil Babka <vbabka@xxxxxxx>
Cc: Christoph Lameter <cl@xxxxxxxxx>
Cc: David Rientjes <rientjes@xxxxxxxxxx>
Cc: Jann Horn <jannh@xxxxxxxxxx>
Cc: Jesper Dangaard Brouer <brouer@xxxxxxxxxx>
Cc: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Cc: Mike Galbraith <efault@xxxxxx>
Cc: Pekka Enberg <penberg@xxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/slub.c |    6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

--- a/mm/slub.c~mm-slub-make-object_map_lock-a-raw_spinlock_t
+++ a/mm/slub.c
@@ -438,7 +438,7 @@ static inline bool cmpxchg_double_slab(s
 
 #ifdef CONFIG_SLUB_DEBUG
 static unsigned long object_map[BITS_TO_LONGS(MAX_OBJS_PER_PAGE)];
-static DEFINE_SPINLOCK(object_map_lock);
+static DEFINE_RAW_SPINLOCK(object_map_lock);
 
 static void __fill_map(unsigned long *obj_map, struct kmem_cache *s,
 		       struct page *page)
@@ -483,7 +483,7 @@ static unsigned long *get_map(struct kme
 {
 	VM_BUG_ON(!irqs_disabled());
 
-	spin_lock(&object_map_lock);
+	raw_spin_lock(&object_map_lock);
 
 	__fill_map(object_map, s, page);
 
@@ -493,7 +493,7 @@ static unsigned long *get_map(struct kme
 static void put_map(unsigned long *map) __releases(&object_map_lock)
 {
 	VM_BUG_ON(map != object_map);
-	spin_unlock(&object_map_lock);
+	raw_spin_unlock(&object_map_lock);
 }
 
 static inline unsigned int size_from_object(struct kmem_cache *s)
_

Patches currently in -mm which might be from bigeasy@xxxxxxxxxxxxx are

mm-slub-move-flush_cpu_slab-invocations-__free_slab-invocations-out-of-irq-context.patch
mm-slub-make-object_map_lock-a-raw_spinlock_t.patch
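
As background on why the conversion matters: get_map() asserts
VM_BUG_ON(!irqs_disabled()), i.e. the lock is taken with interrupts off.
On PREEMPT_RT a plain spinlock_t is substituted by a sleeping lock and
must not be taken in such a context, while a raw_spinlock_t always stays
a true spinning lock.  The sketch below is not part of the patch; the
lock and function names (example_lock, example_atomic_user) are
hypothetical and only illustrate the same pattern.

	#include <linux/spinlock.h>
	#include <linux/irqflags.h>

	/* Hypothetical lock, mirroring how object_map_lock is used. */
	static DEFINE_RAW_SPINLOCK(example_lock);

	static void example_atomic_user(void)
	{
		unsigned long flags;

		/*
		 * Interrupts are disabled here, so any lock taken below
		 * must never sleep.  On PREEMPT_RT a spinlock_t would
		 * sleep; a raw_spinlock_t keeps spinning.
		 */
		local_irq_save(flags);

		raw_spin_lock(&example_lock);
		/* ... short, bounded critical section ... */
		raw_spin_unlock(&example_lock);

		local_irq_restore(flags);
	}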