The patch titled
     Subject: kasan: add memzero int for unaligned size at DEBUG
has been added to the -mm tree.  Its filename is
     kasan-add-memzero-int-for-unaligned-size-at-debug.patch

This patch should soon appear at
    https://ozlabs.org/~akpm/mmots/broken-out/kasan-add-memzero-int-for-unaligned-size-at-debug.patch
and later at
    https://ozlabs.org/~akpm/mmotm/broken-out/kasan-add-memzero-int-for-unaligned-size-at-debug.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Yee Lee <yee.lee@xxxxxxxxxxxx>
Subject: kasan: add memzero int for unaligned size at DEBUG

Issue: when SLUB debug is on, hwtag kasan_unpoison() would overwrite the
redzone of an object with unaligned size.

An additional memzero_explicit() path is added to replace the
init-by-hwtag instruction for objects with unaligned size under SLUB
debug mode.  The performance penalty is acceptable since SLUB redzones
are only enabled in debug mode, not in production builds.  A comment
block is added for explanation.

Link: https://lkml.kernel.org/r/20210705103229.8505-3-yee.lee@xxxxxxxxxxxx
Signed-off-by: Yee Lee <yee.lee@xxxxxxxxxxxx>
Suggested-by: Andrey Konovalov <andreyknvl@xxxxxxxxx>
Suggested-by: Marco Elver <elver@xxxxxxxxxx>
Cc: Andrey Ryabinin <ryabinin.a.a@xxxxxxxxx>
Cc: Alexander Potapenko <glider@xxxxxxxxxx>
Cc: Dmitry Vyukov <dvyukov@xxxxxxxxxx>
Cc: Nicholas Tang <nicholas.tang@xxxxxxxxxxxx>
Cc: Kuan-Ying Lee <Kuan-Ying.Lee@xxxxxxxxxxxx>
Cc: Chinwen Chang <chinwen.chang@xxxxxxxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/kasan/kasan.h |   12 ++++++++++++
 1 file changed, 12 insertions(+)

--- a/mm/kasan/kasan.h~kasan-add-memzero-int-for-unaligned-size-at-debug
+++ a/mm/kasan/kasan.h
@@ -9,6 +9,7 @@
 #ifdef CONFIG_KASAN_HW_TAGS
 
 #include <linux/static_key.h>
+#include "../slab.h"
 
 DECLARE_STATIC_KEY_FALSE(kasan_flag_stacktrace);
 extern bool kasan_flag_async __ro_after_init;
@@ -387,6 +388,17 @@ static inline void kasan_unpoison(const
 	if (WARN_ON((unsigned long)addr & KASAN_GRANULE_MASK))
 		return;
 
+	/*
+	 * Explicitly initialize the memory with the precise object size to
+	 * avoid overwriting the SLAB redzone. This disables initialization in
+	 * the arch code and may thus lead to performance penalty. The penalty
+	 * is accepted since SLAB redzones aren't enabled in production builds.
+	 */
+	if (__slub_debug_enabled() &&
+	    init && ((unsigned long)size & KASAN_GRANULE_MASK)) {
+		init = false;
+		memzero_explicit((void *)addr, size);
+	}
 	size = round_up(size, KASAN_GRANULE_SIZE);
 
 	hw_set_mem_tag_range((void *)addr, size, tag, init);
_

Patches currently in -mm which might be from yee.lee@xxxxxxxxxxxx are

kasan-add-memzero-int-for-unaligned-size-at-debug.patch
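
[Editorial illustration, not part of the patch: a minimal standalone C
sketch of why granule-aligned init would touch the redzone and what the
precise-size zeroing avoids.  The 16-byte granule and the 40-byte object
size are assumed values chosen only for demonstration; the kernel helpers
are stubbed with userspace equivalents.]

/*
 * Standalone illustration: with a 16-byte KASAN granule, unpoisoning a
 * 40-byte object rounds the range up to 48 bytes, so granule-wide init
 * would also write the first 8 bytes of the adjacent SLUB redzone.
 * Zeroing exactly `size` bytes and skipping the hardware init, as the
 * patch does, leaves the redzone untouched.
 */
#include <stdio.h>
#include <string.h>

#define KASAN_GRANULE_SIZE	16UL
#define KASAN_GRANULE_MASK	(KASAN_GRANULE_SIZE - 1)

/* Same rounding the kernel's round_up() performs for a power-of-two size. */
static unsigned long round_up_granule(unsigned long size)
{
	return (size + KASAN_GRANULE_MASK) & ~KASAN_GRANULE_MASK;
}

int main(void)
{
	unsigned long size = 40;	/* assumed unaligned object size */
	unsigned long rounded = round_up_granule(size);
	char object[64];		/* object followed by its redzone */

	printf("object size %lu rounds up to %lu: %lu redzone byte(s) would be written\n",
	       size, rounded, rounded - size);

	/* Debug-mode path: zero only the precise object size, leave init off. */
	if (size & KASAN_GRANULE_MASK)
		memset(object, 0, size);	/* stand-in for memzero_explicit() */

	return 0;
}

With the patch applied, this unaligned case takes the memzero_explicit()
path, so only the object bytes are written and hw_set_mem_tag_range()
runs with init == false.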