On Fri, 30 Oct 2020 at 03:49, Jann Horn <jannh@xxxxxxxxxx> wrote:
> On Thu, Oct 29, 2020 at 2:17 PM Marco Elver <elver@xxxxxxxxxx> wrote:
> > Inserts KFENCE hooks into the SLAB allocator.
> [...]
> > diff --git a/mm/slab.c b/mm/slab.c
> [...]
> > @@ -3416,6 +3427,11 @@ static void cache_flusharray(struct kmem_cache *cachep, struct array_cache *ac)
> >  static __always_inline void __cache_free(struct kmem_cache *cachep, void *objp,
> >  					 unsigned long caller)
> >  {
> > +	if (kfence_free(objp)) {
> > +		kmemleak_free_recursive(objp, cachep->flags);
> > +		return;
> > +	}
>
> This looks dodgy. Normally kmemleak is told that an object is being
> freed *before* the object is actually released. I think that if this
> races really badly, we'll make kmemleak stumble over this bit in
> create_object():
>
> 	kmemleak_stop("Cannot insert 0x%lx into the object search tree (overlaps existing)\n",
> 		      ptr);

Good catch. Although such a race is extremely unlikely, let's avoid it by
notifying kmemleak first and only actually freeing the object afterwards
(sketch at the end of this mail).

> > +
> >  	/* Put the object into the quarantine, don't touch it for now. */
> >  	if (kasan_slab_free(cachep, objp, _RET_IP_))
> >  		return;
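
Something like the following, as a minimal untested sketch; it assumes the
is_kfence_address()/__kfence_free() pair that kfence_free() is built from
earlier in this series:

	if (is_kfence_address(objp)) {
		/*
		 * Tell kmemleak while the object is still live, so its
		 * record is removed from the object search tree before the
		 * address can be handed out again. A racing create_object()
		 * for the same address then cannot find an overlapping
		 * record and trip kmemleak_stop().
		 */
		kmemleak_free_recursive(objp, cachep->flags);
		__kfence_free(objp);
		return;
	}

	/* Put the object into the quarantine, don't touch it for now. */
	if (kasan_slab_free(cachep, objp, _RET_IP_))
		return;

This only reorders the KFENCE branch; regular slab objects keep their
existing kmemleak notification on the normal free path.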