On Fri, Oct 2, 2020 at 11:28 PM Marco Elver <elver@xxxxxxxxxx> wrote:
> On Fri, 2 Oct 2020 at 21:32, Jann Horn <jannh@xxxxxxxxxx> wrote:
> > > That's another check; we don't want to make this more expensive.
> >
> > Ah, right, I missed that this is the one piece of KFENCE that is
> > actually really hot code until Dmitry pointed that out.
> >
> > But actually, can't you reduce how hot this is for SLUB by moving
> > is_kfence_address() down into the freeing slowpath? At the moment
> > you use it in slab_free_freelist_hook(), which is in the super-hot
> > fastpath, but you should be able to at least move it down into
> > __slab_free()...
> >
> > Actually, you already have hooked into __slab_free(), so can't you
> > just get rid of the check in slab_free_freelist_hook()?
>
> I missed this bit: the loop that follows wants the free pointer, so I
> currently don't see how this might work. :-/

reverse call graph:

__slab_free
  do_slab_free
    slab_free
      kmem_cache_free (frees a single non-kmalloc allocation)
      kmem_cache_free_bulk (frees multiple)
      kfree (frees a single kmalloc allocation)
    ___cache_free (frees a single allocation for KASAN)

So the only path through which __slab_free() can actually be reached
with more than one object is kmem_cache_free_bulk(); and you've
already changed build_detached_freelist() (which kmem_cache_free_bulk()
uses to group objects from the same page) to consume KFENCE allocations
before they can ever reach __slab_free(). So we know that if we've
reached __slab_free(), we are being called with either a single object
(which may be a KFENCE object) or with a list of objects that all
belong to the same page and don't contain any KFENCE allocations.
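For reference, the filtering in build_detached_freelist() that this
argument relies on might look roughly like the following (my
reconstruction of the idea, not your actual diff; the exact hook
placement and helpers may differ in the series):

	/* inside build_detached_freelist(), after picking the next object */
	if (is_kfence_address(object)) {
		slab_free_hook(df->s, object);	/* run the usual free hooks */
		__kfence_free(object);		/* hand the object back to KFENCE */
		p[size] = NULL;			/* mark object processed */
		return size;
	}

With something like this, a detached freelist handed onward by the
bulk path can never contain a KFENCE object.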
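Given that invariant, a single check at the top of the slowpath should
be enough. An untested sketch (assuming kfence_free() from this
series, which frees the object and returns true iff it was a KFENCE
allocation):

static void __slab_free(struct kmem_cache *s, struct page *page,
			void *head, void *tail, int cnt,
			unsigned long addr)
{
	/*
	 * Bulk frees have already had KFENCE objects filtered out by
	 * build_detached_freelist(), so only a single-object free can
	 * hand us a KFENCE allocation here; one check in the slowpath
	 * covers all callers.
	 */
	if (kfence_free(head))
		return;

	/* ... rest of the existing slowpath, unchanged ... */
}

That would keep slab_free_freelist_hook(), and with it the freeing
fastpath, entirely free of KFENCE checks.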