On Thu, Oct 29, 2020 at 2:17 PM Marco Elver <elver@xxxxxxxxxx> wrote:
> Inserts KFENCE hooks into the SLAB allocator.
[...]
> diff --git a/mm/slab.c b/mm/slab.c
[...]
> @@ -3416,6 +3427,11 @@ static void cache_flusharray(struct kmem_cache *cachep, struct array_cache *ac)
>  static __always_inline void __cache_free(struct kmem_cache *cachep, void *objp,
>  					 unsigned long caller)
>  {
> +	if (kfence_free(objp)) {
> +		kmemleak_free_recursive(objp, cachep->flags);
> +		return;
> +	}

This looks dodgy. Normally kmemleak is told that an object is being
freed *before* the object is actually released. I think that if this
races really badly, we'll make kmemleak stumble over this bit in
create_object():

	kmemleak_stop("Cannot insert 0x%lx into the object search tree (overlaps existing)\n",
		      ptr);

> +
>  	/* Put the object into the quarantine, don't touch it for now. */
>  	if (kasan_slab_free(cachep, objp, _RET_IP_))
>  		return;
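An untested sketch of what I mean, assuming the series provides an
is_kfence_address() predicate so the kmemleak notification can happen
before the object is actually released:

	if (is_kfence_address(objp)) {
		/*
		 * Notify kmemleak while the object is still ours: once it
		 * is released, a racing allocation could hand out the same
		 * address and create_object() would hit the overlap check.
		 */
		kmemleak_free_recursive(objp, cachep->flags);
		kfence_free(objp);
		return;
	}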