On Tue, Jun 07, 2022 at 05:20:10PM +0200, Sebastian Andrzej Siewior wrote:
> The set_track() invocation in free_debug_processing() is invoked with
> acquired slab_lock(). The lock disables interrupts on PREEMPT_RT and
> this forbids to allocate memory which is done in stack_depot_save().

Thanks, nice catch.

> Split set_track() into two parts: set_track_prepare() which allocate
> memory and set_track_update() which only performs the assignment of the
> trace data structure. Use set_track_prepare() before disabling
> interrupts.
>
> Fixes: 5cf909c553e9e ("mm/slub: use stackdepot to save stack trace in objects")
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
> ---
>  mm/slub.c | 39 +++++++++++++++++++++++++++++++++------
>  1 file changed, 33 insertions(+), 6 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index e5535020e0fdf..ee0445ccb288c 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -726,19 +726,42 @@ static struct track *get_track(struct kmem_cache *s, void *object,
>  	return kasan_reset_tag(p + alloc);
>  }
>
> -static void noinline set_track(struct kmem_cache *s, void *object,
> -			enum track_item alloc, unsigned long addr)
> +static noinline depot_stack_handle_t set_track_prepare(void)
>  {
> -	struct track *p = get_track(s, object, alloc);
> -
> +	depot_stack_handle_t handle = 0;
>  #ifdef CONFIG_STACKDEPOT
>  	unsigned long entries[TRACK_ADDRS_COUNT];
>  	unsigned int nr_entries;
>
>  	nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 3);
> -	p->handle = stack_depot_save(entries, nr_entries, GFP_NOWAIT);
> +	handle = stack_depot_save(entries, nr_entries, GFP_NOWAIT);
>  #endif
> +	return handle;
> +}
>
> +static void set_track_update(struct kmem_cache *s, void *object,
> +			     enum track_item alloc, unsigned long addr,
> +			     depot_stack_handle_t handle)
> +{
> +	struct track *p = get_track(s, object, alloc);
> +
> +#ifdef CONFIG_STACKDEPOT
> +	p->handle = handle;
> +#endif
> +	p->addr = addr;
> +	p->cpu = smp_processor_id();
> +	p->pid = current->pid;
> +	p->when = jiffies;
> +}
> +
> +static __always_inline void set_track(struct kmem_cache *s, void *object,
> +				      enum track_item alloc, unsigned long addr)
> +{
> +	struct track *p = get_track(s, object, alloc);
> +
> +#ifdef CONFIG_STACKDEPOT
> +	p->handle = set_track_prepare();
> +#endif
>  	p->addr = addr;
>  	p->cpu = smp_processor_id();
>  	p->pid = current->pid;
> @@ -1373,6 +1396,10 @@ static noinline int free_debug_processing(
>  	int cnt = 0;
>  	unsigned long flags, flags2;
>  	int ret = 0;
> +	depot_stack_handle_t handle = 0;
> +
> +	if (s->flags & SLAB_STORE_USER)
> +		handle = set_track_prepare();
>
>  	spin_lock_irqsave(&n->list_lock, flags);
>  	slab_lock(slab, &flags2);
> @@ -1391,7 +1418,7 @@ static noinline int free_debug_processing(
>  	}
>
>  	if (s->flags & SLAB_STORE_USER)
> -		set_track(s, object, TRACK_FREE, addr);
> +		set_track_update(s, object, TRACK_FREE, addr, handle);
>  	trace(s, slab, object, 0);
>  	/* Freepointer not overwritten by init_object(), SLAB_POISON moved it */
>  	init_object(s, object, SLUB_RED_INACTIVE);
> --
> 2.36.1
>

Looks good to me.
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@xxxxxxxxx>

--
Thanks,
Hyeonggon