On Thu, Apr 25, 2024 at 2:09 PM Kent Overstreet <kent.overstreet@xxxxxxxxx> wrote:
>
> On Thu, Apr 25, 2024 at 01:55:23PM -0700, Kees Cook wrote:
> > The system will immediately fill up the stack and crash when both
> > CONFIG_DEBUG_KMEMLEAK and CONFIG_MEM_ALLOC_PROFILING are enabled.
> > Avoid allocation tagging of kmemleak caches, otherwise recursive
> > allocation tracking occurs.
> >
> > Fixes: 279bb991b4d9 ("mm/slab: add allocation accounting into slab allocation and free paths")
> > Signed-off-by: Kees Cook <keescook@xxxxxxxxxxxx>
> > ---
> > Cc: Suren Baghdasaryan <surenb@xxxxxxxxxx>
> > Cc: Kent Overstreet <kent.overstreet@xxxxxxxxx>
> > Cc: Catalin Marinas <catalin.marinas@xxxxxxx>
> > Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> > Cc: Christoph Lameter <cl@xxxxxxxxx>
> > Cc: Pekka Enberg <penberg@xxxxxxxxxx>
> > Cc: David Rientjes <rientjes@xxxxxxxxxx>
> > Cc: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
> > Cc: Vlastimil Babka <vbabka@xxxxxxx>
> > Cc: Roman Gushchin <roman.gushchin@xxxxxxxxx>
> > Cc: Hyeonggon Yoo <42.hyeyoo@xxxxxxxxx>
> > Cc: linux-mm@xxxxxxxxx
> > ---
> >  mm/kmemleak.c | 4 ++--
> >  mm/slub.c     | 2 +-
> >  2 files changed, 3 insertions(+), 3 deletions(-)
> >
> > diff --git a/mm/kmemleak.c b/mm/kmemleak.c
> > index c55c2cbb6837..fdcf01f62202 100644
> > --- a/mm/kmemleak.c
> > +++ b/mm/kmemleak.c
> > @@ -463,7 +463,7 @@ static struct kmemleak_object *mem_pool_alloc(gfp_t gfp)
> >
> >  	/* try the slab allocator first */
> >  	if (object_cache) {
> > -		object = kmem_cache_alloc(object_cache, gfp_kmemleak_mask(gfp));
> > +		object = kmem_cache_alloc_noprof(object_cache, gfp_kmemleak_mask(gfp));
>
> What do these get accounted to, or does this now pop a warning with
> CONFIG_MEM_ALLOC_PROFILING_DEBUG?

Thanks for the fix, Kees! I'll look into this recursion more closely to
see if there is a better way to break it (some notes on how I currently
read the loop are below the quoted patch). As a stopgap measure this
seems ok to me. I also think it's unlikely that one would use both
tracking mechanisms on the same system.

> >  		if (object)
> >  			return object;
> >  	}
> > @@ -947,7 +947,7 @@ static void add_scan_area(unsigned long ptr, size_t size, gfp_t gfp)
> >  	untagged_objp = (unsigned long)kasan_reset_tag((void *)object->pointer);
> >
> >  	if (scan_area_cache)
> > -		area = kmem_cache_alloc(scan_area_cache, gfp_kmemleak_mask(gfp));
> > +		area = kmem_cache_alloc_noprof(scan_area_cache, gfp_kmemleak_mask(gfp));
> >
> >  	raw_spin_lock_irqsave(&object->lock, flags);
> >  	if (!area) {
> > diff --git a/mm/slub.c b/mm/slub.c
> > index a94a0507e19c..9ae032ed17ed 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -2016,7 +2016,7 @@ prepare_slab_obj_exts_hook(struct kmem_cache *s, gfp_t flags, void *p)
> >  	if (!p)
> >  		return NULL;
> >
> > -	if (s->flags & SLAB_NO_OBJ_EXT)
> > +	if (s->flags & (SLAB_NO_OBJ_EXT | SLAB_NOLEAKTRACE))
> >  		return NULL;
> >
> >  	if (flags & __GFP_NO_OBJ_EXT)
> > --
> > 2.34.1
> >
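
For the archives, here is how I currently read the recursion; I haven't
traced it line by line yet, so take the exact call chain with a grain of
salt. With CONFIG_MEM_ALLOC_PROFILING every slab allocation goes through
prepare_slab_obj_exts_hook(), which may need a kmalloc-family allocation
to set up the obj_ext vector for the slab. With CONFIG_DEBUG_KMEMLEAK that
allocation is itself tracked by kmemleak, which allocates a kmemleak_object
from object_cache via mem_pool_alloc(), and that allocation re-enters
prepare_slab_obj_exts_hook(), and so on until the stack overflows.

The SLAB_NOLEAKTRACE check breaks the cycle because kmemleak's internal
caches are created with that flag. From my reading of mm/kmemleak.c
(excerpt from memory, trimmed, comment mine):

	void __init kmemleak_init(void)
	{
		...
		/*
		 * Caches backing kmemleak's own metadata. SLAB_NOLEAKTRACE
		 * keeps kmemleak from tracking allocations made from them.
		 */
		object_cache = KMEM_CACHE(kmemleak_object, SLAB_NOLEAKTRACE);
		scan_area_cache = KMEM_CACHE(kmemleak_scan_area, SLAB_NOLEAKTRACE);
		...
	}

So skipping obj_ext allocation for SLAB_NOLEAKTRACE caches means kmemleak's
own metadata allocations never trigger another tracked allocation, which is
what was feeding the loop.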