On Wed, 6 Dec 2023 at 14:02, Chengming Zhou <chengming.zhou@xxxxxxxxx> wrote:
>
> On 2023/12/6 17:58, Vlastimil Babka wrote:
> > On 12/5/23 14:27, Chengming Zhou wrote:
> >> On 2023/12/5 03:34, Vlastimil Babka wrote:
> >>> When freeing an object that was allocated from KFENCE, we do that in the
> >>> slowpath __slab_free(), relying on the fact that KFENCE "slab" cannot be
> >>> the cpu slab, so the fastpath has to fallback to the slowpath.
> >>>
> >>> This optimization doesn't help much though, because is_kfence_address()
> >>> is checked earlier anyway during the free hook processing or detached
> >>> freelist building. Thus we can simplify the code by making the
> >>> slab_free_hook() free the KFENCE object immediately, similarly to KASAN
> >>> quarantine.
> >>>
> >>> In slab_free_hook() we can place kfence_free() above init processing, as
> >>> callers have been making sure to set init to false for KFENCE objects.
> >>> This simplifies slab_free(). This places it also above kasan_slab_free()
> >>> which is ok as that skips KFENCE objects anyway.
> >>>
> >>> While at it also determine the init value in slab_free_freelist_hook()
> >>> outside of the loop.
> >>>
> >>> This change will also make introducing per cpu array caches easier.
> >>>
> >>> Tested-by: Marco Elver <elver@xxxxxxxxxx>
> >>> Signed-off-by: Vlastimil Babka <vbabka@xxxxxxx>
> >>> ---
> >>>  mm/slub.c | 22 ++++++++++------------
> >>>  1 file changed, 10 insertions(+), 12 deletions(-)
> >>>
> >>> diff --git a/mm/slub.c b/mm/slub.c
> >>> index ed2fa92e914c..e38c2b712f6c 100644
> >>> --- a/mm/slub.c
> >>> +++ b/mm/slub.c
> >>> @@ -2039,7 +2039,7 @@ static inline void memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab,
> >>>   * production configuration these hooks all should produce no code at all.
> >>>   *
> >>>   * Returns true if freeing of the object can proceed, false if its reuse
> >>> - * was delayed by KASAN quarantine.
> >>> + * was delayed by KASAN quarantine, or it was returned to KFENCE.
> >>>   */
> >>>  static __always_inline
> >>>  bool slab_free_hook(struct kmem_cache *s, void *x, bool init)
> >>> @@ -2057,6 +2057,9 @@ bool slab_free_hook(struct kmem_cache *s, void *x, bool init)
> >>>  	__kcsan_check_access(x, s->object_size,
> >>>  			     KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ASSERT);
> >>>
> >>> +	if (kfence_free(kasan_reset_tag(x)))
> >>
> >> I'm wondering if "kasan_reset_tag()" is needed here?
> >
> > I think so, because AFAICS the is_kfence_address() check in kfence_free()
> > could be a false negative otherwise. In fact now I even question some of the
>
> Ok.
>
> > other is_kfence_address() checks in mm/slub.c, mainly
> > build_detached_freelist() which starts from pointers coming directly from
> > slab users. Insight from KASAN/KFENCE folks appreciated :)
> >
>
> I know very little about KASAN/KFENCE, looking forward to their insight. :)
>
> Just saw a check in __kasan_slab_alloc():
>
> 	if (is_kfence_address(object))
> 		return (void *)object;
>
> So thought it seems that a kfence object would be skipped by KASAN.

The is_kfence_address() implementation tolerates tagged addresses, i.e.
if it receives a tagged non-kfence address, it will never return true.

The KASAN_HW_TAGS patches and KFENCE patches were in development
concurrently, and at the time there was some conflict resolution that
happened when both were merged. The is_kfence_address(kasan_reset_tag(..))
initially came from [1] but was squashed into 2b8305260fb.
[1] https://lore.kernel.org/all/9dc196006921b191d25d10f6e611316db7da2efc.1611946152.git.andreyknvl@xxxxxxxxxx/

Andrey, do you recall what issue you encountered that needed kasan_reset_tag()?
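
For reference, here is a stand-alone user-space sketch of why the pointer
tag matters for a "is this address inside the pool?" range check. It is
only an illustration: pool_start, POOL_SIZE, in_pool() and reset_tag() are
made-up names standing in for the real helpers, it assumes 64-bit pointers
with the tag in the top byte, and the real kasan_reset_tag() differs in
detail.

	#include <stdbool.h>
	#include <stdint.h>
	#include <stdio.h>

	#define POOL_SIZE	(2UL * 1024 * 1024)
	/* Hypothetical pool base; stands in for the KFENCE pool start. */
	static const uintptr_t pool_start = 0x12340000UL;

	/*
	 * Range check in the spirit of is_kfence_address(): a tag in the
	 * top byte makes the subtraction huge, so a tagged non-pool address
	 * can never compare below POOL_SIZE (no false positive), but a
	 * tagged pool address is missed (false negative).
	 */
	static bool in_pool(const void *addr)
	{
		return ((uintptr_t)addr - pool_start) < POOL_SIZE;
	}

	/* Stand-in for resetting the pointer tag (clears the top byte). */
	static void *reset_tag(const void *addr)
	{
		return (void *)((uintptr_t)addr & ~(0xffULL << 56));
	}

	int main(void)
	{
		/* An object inside the pool, carrying pointer tag 0xab. */
		void *tagged = (void *)(uintptr_t)((pool_start + 0x100) |
						   (0xabULL << 56));

		printf("%d\n", in_pool(tagged));		/* 0: false negative */
		printf("%d\n", in_pool(reset_tag(tagged)));	/* 1 */
		return 0;
	}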