On Tue, Jan 5, 2021 at 7:28 PM Andrey Konovalov <andreyknvl@xxxxxxxxxx> wrote:
>
> Since the hardware tag-based KASAN mode might not have a redzone that
> comes after an allocated object (when kasan.mode=prod is enabled), the
> kasan_bitops_tags() test ends up corrupting the next object in memory.
>
> Change the test so it always accesses the redzone that lies within the
> allocated object's boundaries.
>
> Signed-off-by: Andrey Konovalov <andreyknvl@xxxxxxxxxx>
> Link: https://linux-review.googlesource.com/id/I67f51d1ee48f0a8d0fe2658c2a39e4879fe0832a
> ---
>  lib/test_kasan.c | 12 ++++++------
>  1 file changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/lib/test_kasan.c b/lib/test_kasan.c
> index b67da7f6e17f..3ea52da52714 100644
> --- a/lib/test_kasan.c
> +++ b/lib/test_kasan.c
> @@ -771,17 +771,17 @@ static void kasan_bitops_tags(struct kunit *test)
>
>  	/* This test is specifically crafted for the tag-based mode. */
>  	if (IS_ENABLED(CONFIG_KASAN_GENERIC)) {
> -		kunit_info(test, "skipping, CONFIG_KASAN_SW_TAGS required");
> +		kunit_info(test, "skipping, CONFIG_KASAN_SW/HW_TAGS required");
>  		return;
>  	}
>
> -	/* Allocation size will be rounded to up granule size, which is 16. */
> -	bits = kzalloc(sizeof(*bits), GFP_KERNEL);
> +	/* kmalloc-64 cache will be used and the last 16 bytes will be the redzone. */
> +	bits = kzalloc(48, GFP_KERNEL);

I think it might make sense to call ksize() here to ensure we have these spare bytes.
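
Something along these lines, perhaps (just a rough sketch of the suggested check on top of this patch, assuming the 48-byte request really lands in the kmalloc-64 cache and that a KUNIT_ASSERT_GE() comparison fits here; I haven't checked whether calling ksize() has side effects on the KASAN poisoning of the rest of the object, so this may need double-checking):

	bits = kzalloc(48, GFP_KERNEL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, bits);

	/*
	 * Check that the object has at least 16 spare bytes past the
	 * 48 requested, so the redzone accesses below stay within the
	 * allocated object.
	 */
	KUNIT_ASSERT_GE(test, ksize(bits), (size_t)64);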