On Tue, 29 Nov 2022 at 07:37, Feng Tang <feng.tang@xxxxxxxxx> wrote:
>
> kmalloc redzone check for slub has been merged, and it's better to add
> a kunit case for it, which is inspired by a real-world case as described
> in commit 120ee599b5bf ("staging: octeon-usb: prevent memory corruption"):
>
> "
> octeon-hcd will crash the kernel when SLOB is used. This usually happens
> after the 18-byte control transfer when a device descriptor is read.
> The DMA engine is always transferring full 32-bit words and if the
> transfer is shorter, some random garbage appears after the buffer.
> The problem is not visible with SLUB since it rounds up the allocations
> to word boundary, and the extra bytes will go undetected.
> "
>
> To avoid interfering with the normal functioning of the kmalloc caches,
> a kmem_cache mimicking a kmalloc cache is created with all the flags
> necessary to have the kmalloc redzone enabled, and kmalloc_trace() is
> used to actually exercise the orig_size and redzone setup.
>
> Suggested-by: Vlastimil Babka <vbabka@xxxxxxx>
> Signed-off-by: Feng Tang <feng.tang@xxxxxxxxx>
> ---
> Changelog:
>
> since v1:
>  * create a new cache mimicking a kmalloc cache, to reduce the
>    dependency on the global slub_debug setting (Vlastimil Babka)
>
>  lib/slub_kunit.c | 23 +++++++++++++++++++++++
>  mm/slab.h        |  3 ++-
>  2 files changed, 25 insertions(+), 1 deletion(-)
>
> diff --git a/lib/slub_kunit.c b/lib/slub_kunit.c
> index a303adf8f11c..dbdd656624d0 100644
> --- a/lib/slub_kunit.c
> +++ b/lib/slub_kunit.c
> @@ -122,6 +122,28 @@ static void test_clobber_redzone_free(struct kunit *test)
>  	kmem_cache_destroy(s);
>  }
>
> +static void test_kmalloc_redzone_access(struct kunit *test)
> +{
> +	struct kmem_cache *s = kmem_cache_create("TestSlub_RZ_kmalloc", 32, 0,
> +				SLAB_KMALLOC|SLAB_STORE_USER|SLAB_RED_ZONE|DEFAULT_FLAGS,
> +				NULL);
> +	u8 *p = kmalloc_trace(s, GFP_KERNEL, 18);
> +
> +	kasan_disable_current();
> +
> +	/* Suppress the -Warray-bounds warning */
> +	OPTIMIZER_HIDE_VAR(p);
> +	p[18] = 0xab;
> +	p[19] = 0xab;
> +
> +	kmem_cache_free(s, p);
> +	validate_slab_cache(s);
> +	KUNIT_EXPECT_EQ(test, 2, slab_errors);
> +
> +	kasan_enable_current();
> +	kmem_cache_destroy(s);
> +}
> +
>  static int test_init(struct kunit *test)
>  {
>  	slab_errors = 0;
> @@ -141,6 +163,7 @@ static struct kunit_case test_cases[] = {
>  #endif
>
>  	KUNIT_CASE(test_clobber_redzone_free),
> +	KUNIT_CASE(test_kmalloc_redzone_access),
>  	{}
>  };
>
> diff --git a/mm/slab.h b/mm/slab.h
> index c71590f3a22b..b6cd98b16ba7 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -327,7 +327,8 @@ static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
>  /* Legal flag mask for kmem_cache_create(), for various configurations */
>  #define SLAB_CORE_FLAGS (SLAB_HWCACHE_ALIGN | SLAB_CACHE_DMA | \
>  			 SLAB_CACHE_DMA32 | SLAB_PANIC | \
> -			 SLAB_TYPESAFE_BY_RCU | SLAB_DEBUG_OBJECTS )
> +			 SLAB_KMALLOC | SLAB_SKIP_KFENCE | \
> +			 SLAB_TYPESAFE_BY_RCU | SLAB_DEBUG_OBJECTS)

Shouldn't this hunk be in the previous patch? Otherwise that patch alone
will fail.

This also makes SLAB_SKIP_KFENCE generally available for cache creation,
which is a significant change; it wasn't possible before. Perhaps add a
brief note to the commit message (or make this a separate patch). We were
trying to avoid making this possible, as it might be abused; however,
given it's required for tests like these, I suppose there's no way around
it.

Thanks,
-- Marco