On Tue, 2021-06-22 at 17:03 +0300, Andrey Konovalov wrote:
> On Mon, Jun 21, 2021 at 6:45 PM <yee.lee@xxxxxxxxxxxx> wrote:
> >
> > From: Yee Lee <yee.lee@xxxxxxxxxxxx>
> >
> > This patch adds a memset to initialize objects of unaligned size.
> > Due to the MTE granularity, the integrated initialization using the
> > hwtag instruction clears bytes in granule-sized units, which may
> > have undesired effects, such as overwriting the redzone of SLUB
> > debug. In this patch, for objects of unaligned size, the function
> > uses memset to initialize the content instead of the hwtag
> > instruction.
> >
> > Signed-off-by: Yee Lee <yee.lee@xxxxxxxxxxxx>
> > ---
> >  mm/kasan/kasan.h | 5 ++++-
> >  1 file changed, 4 insertions(+), 1 deletion(-)
> >
> > diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> > index 8f450bc28045..d8faa64614b7 100644
> > --- a/mm/kasan/kasan.h
> > +++ b/mm/kasan/kasan.h
> > @@ -387,8 +387,11 @@ static inline void kasan_unpoison(const void
> > *addr, size_t size, bool init)
> >
> >         if (WARN_ON((unsigned long)addr & KASAN_GRANULE_MASK))
> >                 return;
> > +       if (init && ((unsigned long)size & KASAN_GRANULE_MASK)) {
> > +               init = false;
> > +               memset((void *)addr, 0, size);
> > +       }
>
> With this implementation, we lose the benefit of setting tags and
> initializing memory with the same instructions.
>
> Perhaps a better implementation would be to call
> hw_set_mem_tag_range() with the size rounded down, and then
> separately deal with the leftover memory.

Yes, that fully takes advantage of the hw instruction. However, the
leftover memory then needs one more hw_set_mem_tag_range() call for
protection as well. If the extra path is only executed under
CONFIG_SLUB_DEBUG, the performance loss would be less of a concern.
(A rough sketch of this is included below the quoted patch.)

> >         size = round_up(size, KASAN_GRANULE_SIZE);
> > -
> >         hw_set_mem_tag_range((void *)addr, size, tag, init);
> >  }
> >
> > --
> > 2.18.0
> >
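
To illustrate, an untested sketch of the idea; it assumes the earlier
checks in kasan_unpoison() stay as they are and reuses the existing
round_down()/KASAN_GRANULE_SIZE/hw_set_mem_tag_range() helpers:

static inline void kasan_unpoison(const void *addr, size_t size, bool init)
{
	u8 tag = get_tag(addr);
	size_t aligned, rest;

	/* ... existing early-return checks unchanged ... */
	if (WARN_ON((unsigned long)addr & KASAN_GRANULE_MASK))
		return;

	aligned = round_down(size, KASAN_GRANULE_SIZE);
	rest = size - aligned;

	/* Tag and (optionally) initialize the granule-aligned part in one go. */
	if (aligned)
		hw_set_mem_tag_range((void *)addr, aligned, tag, init);

	if (rest) {
		/*
		 * Zero only the in-object bytes of the trailing granule so
		 * that the SLUB debug redzone behind the object is not
		 * clobbered ...
		 */
		if (init)
			memset((void *)addr + aligned, 0, rest);
		/* ... but still tag the whole trailing granule. */
		hw_set_mem_tag_range((void *)addr + aligned,
				     KASAN_GRANULE_SIZE, tag, false);
	}
}

The second hw_set_mem_tag_range() only retags the last granule (init ==
false), so the redzone bytes keep their contents; whether this extra
path should be limited to CONFIG_SLUB_DEBUG is still open.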