On Tue, Sep 27, 2022 at 03:11:24AM +0800, Andrey Konovalov wrote:
> On Tue, Sep 13, 2022 at 8:54 AM Feng Tang <feng.tang@xxxxxxxxx> wrote:
> >
> Hi Feng,
>
> > kzalloc/kmalloc will round up the request size to a fixed size
> > (mostly power of 2), so the allocated memory could be more than
> > requested. Currently kzalloc family APIs will zero all the
> > allocated memory.
> >
> > To detect out-of-bound usage of the extra allocated memory, only
> > zero the requested part, so that sanity check could be added to
> > the extra space later.
>
> I still don't like the idea of only zeroing the requested memory and
> not the whole object, considering potential info-leak vulnerabilities.
>
> Can we only do this when SLAB_DEBUG is enabled?

Good point! Will add a slub_debug_orig_size(s) check.

> > Performance wise, smaller zeroing length also brings shorter
> > execution time, as shown from test data on various server/desktop
> > platforms.
> >
> > For kzalloc users who will call ksize() later and utilize this
> > extra space, please be aware that the space is not zeroed any
> > more.
>
> CC Kees

Thanks for adding Kees, who provided review from a security point of view.

> > Signed-off-by: Feng Tang <feng.tang@xxxxxxxxx>
> > ---
> >  mm/slab.c |  7 ++++---
> >  mm/slab.h |  5 +++--
> >  mm/slub.c | 10 +++++++---

[...]
> > @@ -730,7 +730,8 @@ static inline struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s,
> >
> >  static inline void slab_post_alloc_hook(struct kmem_cache *s,
> >                                         struct obj_cgroup *objcg, gfp_t flags,
> > -                                       size_t size, void **p, bool init)
> > +                                       size_t size, void **p, bool init,
> > +                                       unsigned int orig_size)
> >  {
> >         size_t i;
> >
> > @@ -746,7 +747,7 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
> >         for (i = 0; i < size; i++) {
> >                 p[i] = kasan_slab_alloc(s, p[i], flags, init);
> >                 if (p[i] && init && !kasan_has_integrated_init())
> > -                       memset(p[i], 0, s->object_size);
> > +                       memset(p[i], 0, orig_size);
>
> Note that when KASAN is enabled and has integrated init, it will
> initialize the whole object, which leads to an inconsistency with this
> change.

Do you mean for kzalloc() only? Or is there some newly added KASAN check?
I'm not familiar with the KASAN code. During development, I usually enabled
the KASAN and KFENCE configs and did catch some bugs, while the 0Day bot
also reported some. And with the latest v6 patchset, I haven't seen any
kasan/kfence failed cases.

Also, for generic slub objects, when slub_debug is enabled, the object data
area could already be modified, as in init_object():

	if (s->flags & __OBJECT_POISON) {
		memset(p, POISON_FREE, s->object_size - 1);
		p[s->object_size - 1] = POISON_END;
	}

The slub redzone check actually splits the object into 2 regions,
[0, orig_size-1] and [orig_size, object_size-1], and adds different
sanity checks to them.

Anyway, I'll go check the latest linux-next tree.

Thanks,
Feng