On Tue, May 15, 2018 at 3:13 PM, Andrey Ryabinin <aryabinin@xxxxxxxxxxxxx> wrote:
>
> Using a variable to store the untagged object pointer, instead of
> tagging/untagging back and forth, would make the code easier to follow.
>
> static inline bool shadow_invalid(u8 tag, s8 shadow_byte)
> {
>         if (IS_ENABLED(CONFIG_KASAN_GENERIC))
>                 return shadow_byte < 0 ||
>                        shadow_byte >= KASAN_SHADOW_SCALE_SIZE;
>         else
>                 return tag != (u8)shadow_byte;
> }
>
> static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
> ...
>         if (shadow_invalid(tag, shadow_byte)) {
>                 kasan_report_invalid_free(object, ip);
>                 return true;
>         }
>
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 7cd4a4e8c3be..f11d6059fc06 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -404,12 +404,9 @@ void *kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size,
>  	redzone_end = round_up((unsigned long)object + cache->object_size,
>  			KASAN_SHADOW_SCALE_SIZE);
>
> -#ifdef CONFIG_KASAN_GENERIC
> -	kasan_unpoison_shadow(object, size);
> -#else
>  	tag = random_tag();
> -	kasan_poison_shadow(object, redzone_start - (unsigned long)object, tag);
> -#endif
> +	kasan_unpoison_shadow(set_tag(object, tag), size);
> +
>  	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
>  		KASAN_KMALLOC_REDZONE);
>
> kasan_kmalloc_large() should be left untouched. It works correctly as is
> in both cases. ptr comes from the page allocator already tagged at this
> point.

Will fix all in v2, thanks!