On Sun, Sep 11, 2022 at 07:11:18AM +0800, Andrey Konovalov wrote:
> On Wed, Sep 7, 2022 at 9:10 AM Feng Tang <feng.tang@xxxxxxxxx> wrote:
> >
> > kzalloc/kmalloc will round up the requested size to a fixed size
> > (mostly a power of 2), so the allocated memory can be larger than
> > requested. Currently the kzalloc family of APIs zeroes all of the
> > allocated memory.
> >
> > To detect out-of-bounds usage of this extra allocated memory, zero
> > only the requested part, so that a sanity check can be added for
> > the extra space later.
> >
> > kzalloc users who call ksize() later and make use of this extra
> > space should be aware that the space is no longer zeroed.
> >
> > Signed-off-by: Feng Tang <feng.tang@xxxxxxxxx>
> > ---
> >  mm/slab.c | 6 +++---
> >  mm/slab.h | 9 +++++++--
> >  mm/slub.c | 6 +++---
> >  3 files changed, 13 insertions(+), 8 deletions(-)
> >
> > diff --git a/mm/slab.c b/mm/slab.c
> > index a5486ff8362a..73ecaa7066e1 100644
> > --- a/mm/slab.c
> > +++ b/mm/slab.c
> > @@ -3253,7 +3253,7 @@ slab_alloc_node(struct kmem_cache *cachep, struct list_lru *lru, gfp_t flags,
> >  	init = slab_want_init_on_alloc(flags, cachep);
> >
> >  out:
> > -	slab_post_alloc_hook(cachep, objcg, flags, 1, &objp, init);
> > +	slab_post_alloc_hook(cachep, objcg, flags, 1, &objp, init, 0);
> >  	return objp;
> >  }
> >
> > @@ -3506,13 +3506,13 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
> >  	 * Done outside of the IRQ disabled section.
> >  	 */
> >  	slab_post_alloc_hook(s, objcg, flags, size, p,
> > -				slab_want_init_on_alloc(flags, s));
> > +				slab_want_init_on_alloc(flags, s), 0);
> >
> >  	/* FIXME: Trace call missing. Christoph would like a bulk variant */
> >  	return size;
> >  error:
> >  	local_irq_enable();
> >  	cache_alloc_debugcheck_after_bulk(s, flags, i, p, _RET_IP_);
> > -	slab_post_alloc_hook(s, objcg, flags, i, p, false);
> > +	slab_post_alloc_hook(s, objcg, flags, i, p, false, 0);
> >  	kmem_cache_free_bulk(s, i, p);
> >  	return 0;
> >  }
> >
> > diff --git a/mm/slab.h b/mm/slab.h
> > index d0ef9dd44b71..20f9e2a9814f 100644
> > --- a/mm/slab.h
> > +++ b/mm/slab.h
> > @@ -730,12 +730,17 @@ static inline struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s,
> >
> >  static inline void slab_post_alloc_hook(struct kmem_cache *s,
> >  					struct obj_cgroup *objcg, gfp_t flags,
> > -					size_t size, void **p, bool init)
> > +					size_t size, void **p, bool init,
> > +					unsigned int orig_size)
> >  {
> >  	size_t i;
> >
> >  	flags &= gfp_allowed_mask;
> >
> > +	/* If the original (kmalloc) request size is not set, use object_size */
> > +	if (!orig_size)
> > +		orig_size = s->object_size;
> > +
> >  	/*
> >  	 * As memory initialization might be integrated into KASAN,
> >  	 * kasan_slab_alloc and initialization memset must be
> > @@ -746,7 +751,7 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
> >  	for (i = 0; i < size; i++) {
> >  		p[i] = kasan_slab_alloc(s, p[i], flags, init);
> >  		if (p[i] && init && !kasan_has_integrated_init())
> > -			memset(p[i], 0, s->object_size);
> > +			memset(p[i], 0, orig_size);
>
> Arguably, with slab_want_init_on_alloc(), all allocated memory should
> be zeroed to prevent the possibility of info-leaks, even the unused
> padding. Perhaps Alexander can give his opinion here.

Initially, I thought about zeroing only the requested part (orig_size)
just when slub_debug is enabled for that slab. But from profiling,
zeroing 4096+1 bytes versus zeroing 8192 bytes shows an obvious
difference in execution time (about 10 us vs. 18 us).
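To make the round-up concrete, here is an illustrative snippet (not
from this series; the sizes just mirror the profiling case above, and
error handling is omitted):

	/*
	 * Illustrative sketch only: a 4097-byte request is served
	 * from the kmalloc-8192 cache, so the underlying object is
	 * 8192 bytes.
	 */
	void *buf = kzalloc(4096 + 1, GFP_KERNEL);
	size_t total = ksize(buf);	/* reports 8192, not 4097 */

	/*
	 * Before this patch: all 8192 bytes are zeroed.
	 * After this patch:  only the first 4097 bytes are guaranteed
	 * to be zero; a caller that discovers the [4097, 8192) tail
	 * via ksize() must zero that tail itself before relying on it.
	 */
	kfree(buf);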
Semantics-wise, requesting 'A' zeroed bytes and then expecting 'A+B'
zeroed bytes is not really valid, IMHO.

Also, this patch 2/4 is a preparation for the redzone extension in
patch 4/4; without it, the redzone initialization would be overwritten
by the zeroing. A rough sketch of that interaction follows below the
sign-off.

Thanks,
Feng

> Thanks!
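For reference, the kind of fill the redzone extension relies on could
look roughly like this (a hypothetical sketch, not the actual 4/4
patch; init_kmalloc_redzone() is a made-up helper, while
SLUB_RED_ACTIVE is the existing SLUB poison byte from
include/linux/poison.h):

	/*
	 * Hypothetical sketch of the 4/4 idea: poison the unused
	 * [orig_size, object_size) tail when the object is handed
	 * out, so that an out-of-bounds write into the tail can be
	 * detected later.
	 */
	static void init_kmalloc_redzone(void *obj, unsigned int orig_size,
					 unsigned int object_size)
	{
		memset(obj + orig_size, SLUB_RED_ACTIVE,
		       object_size - orig_size);
	}

If slab_post_alloc_hook() still did memset(p[i], 0, s->object_size),
it would wipe such a pattern; with this patch the zeroing stops at
orig_size, so the redzone can survive until it is checked.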