The patch titled
     Subject: kasan: add memzero init for unaligned size under SLUB debug
has been removed from the -mm tree.  Its filename was
     kasan-add-memzero-init-for-unaligned-size-under-slub-debug.patch

This patch was dropped because an updated version will be merged

------------------------------------------------------
From: Yee Lee <yee.lee@xxxxxxxxxxxx>
Subject: kasan: add memzero init for unaligned size under SLUB debug

Issue: when SLUB debug is on, hwtag kasan_unpoison() would overwrite the
redzone of an object with an unaligned size.

An additional memzero_explicit() path is added, replacing the init done
by the hwtag instruction for unaligned sizes in SLUB debug mode.

Link: https://lkml.kernel.org/r/20210624112624.31215-2-yee.lee@xxxxxxxxxxxx
Signed-off-by: Yee Lee <yee.lee@xxxxxxxxxxxx>
Cc: Andrey Konovalov <andreyknvl@xxxxxxxxx>
Cc: Andrey Ryabinin <ryabinin.a.a@xxxxxxxxx>
Cc: Alexander Potapenko <glider@xxxxxxxxxx>
Cc: Dmitry Vyukov <dvyukov@xxxxxxxxxx>
Cc: Matthias Brugger <matthias.bgg@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/kasan/kasan.h |    6 ++++++
 1 file changed, 6 insertions(+)

--- a/mm/kasan/kasan.h~kasan-add-memzero-init-for-unaligned-size-under-slub-debug
+++ a/mm/kasan/kasan.h
@@ -387,6 +387,12 @@ static inline void kasan_unpoison(const
 	if (WARN_ON((unsigned long)addr & KASAN_GRANULE_MASK))
 		return;
 
+#if IS_ENABLED(CONFIG_SLUB_DEBUG)
+	if (init && ((unsigned long)size & KASAN_GRANULE_MASK)) {
+		init = false;
+		memzero_explicit((void *)addr, size);
+	}
+#endif
 	size = round_up(size, KASAN_GRANULE_SIZE);
 	hw_set_mem_tag_range((void *)addr, size, tag, init);
_

Patches currently in -mm which might be from yee.lee@xxxxxxxxxxxx are