On Fri, 2021-06-25 at 17:03 +0300, Andrey Konovalov wrote:
> On Thu, Jun 24, 2021 at 2:26 PM <yee.lee@xxxxxxxxxxxx> wrote:
> >
> > From: Yee Lee <yee.lee@xxxxxxxxxxxx>
> >
> > Issue: when SLUB debug is on, hwtag kasan_unpoison() would overwrite
> > the redzone of objects with unaligned size.
> >
> > An additional memzero_explicit() path is added to replace init by
> > the hwtag instruction for those unaligned sizes in SLUB debug mode.
> >
> > Signed-off-by: Yee Lee <yee.lee@xxxxxxxxxxxx>
> > ---
> >  mm/kasan/kasan.h | 6 ++++++
> >  1 file changed, 6 insertions(+)
> >
> > diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> > index 8f450bc28045..d1054f35838f 100644
> > --- a/mm/kasan/kasan.h
> > +++ b/mm/kasan/kasan.h
> > @@ -387,6 +387,12 @@ static inline void kasan_unpoison(const void *addr, size_t size, bool init)
> >
> >  	if (WARN_ON((unsigned long)addr & KASAN_GRANULE_MASK))
> >  		return;
> > +#if IS_ENABLED(CONFIG_SLUB_DEBUG)
>
> Is this an issue only with SLUB? SLAB also uses redzones.

As far as I know, hw-tag KASAN only works with SLUB.

> > +	if (init && ((unsigned long)size & KASAN_GRANULE_MASK)) {
>
> This needs a comment along the lines of:
>
> /* Explicitly initialize the memory with the precise object size to
> avoid overwriting the SLAB redzone. This disables initialization in
> the arch code and may thus lead to performance penalty. The penalty is
> accepted since SLAB redzones aren't enabled in production builds. */

Sure, will work on this.

>
> > +		init = false;
> > +		memzero_explicit((void *)addr, size);
> > +	}
> > +#endif
> >  	size = round_up(size, KASAN_GRANULE_SIZE);
> >
> >  	hw_set_mem_tag_range((void *)addr, size, tag, init);
> > --
> > 2.18.0
> >
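
For reference, a rough, untested sketch of how the kasan_unpoison() hunk might
look in v2 with the suggested comment folded in (exact wording still to be
settled):

	if (WARN_ON((unsigned long)addr & KASAN_GRANULE_MASK))
		return;

#if IS_ENABLED(CONFIG_SLUB_DEBUG)
	/*
	 * Explicitly initialize the memory with the precise object size
	 * to avoid overwriting the SLAB redzone. This disables
	 * initialization in the arch code and may thus lead to a
	 * performance penalty. The penalty is accepted since SLAB
	 * redzones aren't enabled in production builds.
	 */
	if (init && ((unsigned long)size & KASAN_GRANULE_MASK)) {
		init = false;
		memzero_explicit((void *)addr, size);
	}
#endif
	size = round_up(size, KASAN_GRANULE_SIZE);

	hw_set_mem_tag_range((void *)addr, size, tag, init);

memzero_explicit() is used rather than memset() so the zeroing cannot be
optimized away, and clearing init keeps hw_set_mem_tag_range() from
re-initializing the granule-rounded range over the redzone.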