+ kasan-add-memzero-init-for-unaligned-size-under-slub-debug.patch added to -mm tree

The patch titled
     Subject: kasan: add memzero init for unaligned size under SLUB debug
has been added to the -mm tree.  Its filename is
     kasan-add-memzero-init-for-unaligned-size-under-slub-debug.patch

This patch should soon appear at
    https://ozlabs.org/~akpm/mmots/broken-out/kasan-add-memzero-init-for-unaligned-size-under-slub-debug.patch
and later at
    https://ozlabs.org/~akpm/mmotm/broken-out/kasan-add-memzero-init-for-unaligned-size-under-slub-debug.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included in linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Yee Lee <yee.lee@xxxxxxxxxxxx>
Subject: kasan: add memzero init for unaligned size under SLUB debug

Issue: when SLUB debug is on, the hardware tag-based kasan_unpoison() would
overwrite the redzone of an object with an unaligned size.

An additional memzero_explicit() path is added to replace init by the hwtag
instruction for objects with unaligned sizes in SLUB debug mode.

Link: https://lkml.kernel.org/r/20210624112624.31215-2-yee.lee@xxxxxxxxxxxx
Signed-off-by: Yee Lee <yee.lee@xxxxxxxxxxxx>
Cc: Andrey Konovalov <andreyknvl@xxxxxxxxx>
Cc: Andrey Ryabinin <ryabinin.a.a@xxxxxxxxx>
Cc: Alexander Potapenko <glider@xxxxxxxxxx>
Cc: Dmitry Vyukov <dvyukov@xxxxxxxxxx>
Cc: Matthias Brugger <matthias.bgg@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/kasan/kasan.h |    6 ++++++
 1 file changed, 6 insertions(+)

--- a/mm/kasan/kasan.h~kasan-add-memzero-init-for-unaligned-size-under-slub-debug
+++ a/mm/kasan/kasan.h
@@ -387,6 +387,12 @@ static inline void kasan_unpoison(const
 
 	if (WARN_ON((unsigned long)addr & KASAN_GRANULE_MASK))
 		return;
+#if IS_ENABLED(CONFIG_SLUB_DEBUG)
+	if (init && ((unsigned long)size & KASAN_GRANULE_MASK)) {
+		init = false;
+		memzero_explicit((void *)addr, size);
+	}
+#endif
 	size = round_up(size, KASAN_GRANULE_SIZE);
 
 	hw_set_mem_tag_range((void *)addr, size, tag, init);
_

Patches currently in -mm which might be from yee.lee@xxxxxxxxxxxx are

kasan-add-memzero-init-for-unaligned-size-under-slub-debug.patch
