The patch titled
     Subject: mm/slub.c: init_on_free=1 should wipe freelist ptr for bulk allocations
has been added to the -mm tree.  Its filename is
     mm-slub-init_on_free=1-should-wipe-freelist-ptr-for-bulk-allocations.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-slub-init_on_free%3D1-should-wipe-freelist-ptr-for-bulk-allocations.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-slub-init_on_free%3D1-should-wipe-freelist-ptr-for-bulk-allocations.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Alexander Potapenko <glider@xxxxxxxxxx>
Subject: mm/slub.c: init_on_free=1 should wipe freelist ptr for bulk allocations

slab_alloc_node() already zeroed out the freelist pointer if init_on_free
was on.  Thibaut Sautereau noticed that the same needs to be done for
kmem_cache_alloc_bulk(), which performs the allocations separately.

kmem_cache_alloc_bulk() is currently used in two places in the kernel, so
this change is unlikely to have a major performance impact.

SLAB doesn't require a similar change, as auto-initialization makes the
allocator store the freelist pointers off-slab.

Link: http://lkml.kernel.org/r/20191007091605.30530-1-glider@xxxxxxxxxx
Fixes: 6471384af2a6 ("mm: security: introduce init_on_alloc=1 and init_on_free=1 boot options")
Signed-off-by: Alexander Potapenko <glider@xxxxxxxxxx>
Reported-by: Thibaut Sautereau <thibaut@xxxxxxxxxxxx>
Reported-by: Kees Cook <keescook@xxxxxxxxxxxx>
Cc: Christoph Lameter <cl@xxxxxxxxx>
Cc: Laura Abbott <labbott@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---
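
As an illustration of the underlying issue, here is a minimal userspace
sketch (struct toy_cache and the toy_* names are hypothetical, for
illustration only, not kernel API).  SLUB keeps each free object's
freelist "next" pointer inside the object itself, at s->offset; an
init_on_free cache zeroes the object when it is freed, but relinking it
into the freelist writes a pointer back into it, so every allocation
path, including the bulk one, has to wipe that slot before handing the
object out:

	/*
	 * Minimal userspace sketch, not kernel code: struct toy_cache and
	 * the toy_* functions are hypothetical names for illustration.
	 */
	#include <stddef.h>
	#include <string.h>

	struct toy_cache {
		size_t object_size;
		size_t offset;       /* where the freelist ptr lives in a free object */
		void *freelist;      /* head of the free list */
		int init_on_free;    /* objects are zeroed when freed */
	};

	/* Mirrors the helper added by the patch: zero the freelist ptr slot. */
	static void toy_wipe_obj_freeptr(struct toy_cache *s, void *obj)
	{
		if (s->init_on_free && obj)
			memset((char *)obj + s->offset, 0, sizeof(void *));
	}

	static void *toy_alloc(struct toy_cache *s)
	{
		void *object = s->freelist;

		if (object) {
			/* Advance the list via the pointer stored in the object. */
			memcpy(&s->freelist, (char *)object + s->offset,
			       sizeof(void *));
			/*
			 * The object was zeroed when freed, but relinking it
			 * wrote a freelist pointer back into it; wipe that
			 * pointer so the object really is fully initialized.
			 */
			toy_wipe_obj_freeptr(s, object);
		}
		return object;
	}

	/*
	 * Bulk allocation grabs objects in a loop.  Before the fix, the
	 * kernel's equivalent loop skipped the wipe, handing out "zeroed"
	 * objects that still contained a freelist pointer.
	 */
	static size_t toy_alloc_bulk(struct toy_cache *s, size_t size, void **p)
	{
		size_t i;

		for (i = 0; i < size && s->freelist; i++) {
			p[i] = s->freelist;
			memcpy(&s->freelist, (char *)p[i] + s->offset,
			       sizeof(void *));
			toy_wipe_obj_freeptr(s, p[i]);  /* the step this patch adds */
		}
		return i;
	}

In the real helper the check is unlikely(slab_want_init_on_free(s)), so
caches that don't use init_on_free see no change on the fast path; the
init_on_free flag above merely stands in for that check.
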

 mm/slub.c |   22 ++++++++++++++++------
 1 file changed, 16 insertions(+), 6 deletions(-)

--- a/mm/slub.c~mm-slub-init_on_free=1-should-wipe-freelist-ptr-for-bulk-allocations
+++ a/mm/slub.c
@@ -2672,6 +2672,17 @@ static void *__slab_alloc(struct kmem_ca
 }
 
 /*
+ * If the object has been wiped upon free, make sure it's fully initialized by
+ * zeroing out freelist pointer.
+ */
+static __always_inline void maybe_wipe_obj_freeptr(struct kmem_cache *s,
+						   void *obj)
+{
+	if (unlikely(slab_want_init_on_free(s)) && obj)
+		memset((void *)((char *)obj + s->offset), 0, sizeof(void *));
+}
+
+/*
  * Inlined fastpath so that allocation functions (kmalloc, kmem_cache_alloc)
  * have the fastpath folded into their functions. So no function call
  * overhead for requests that can be satisfied on the fastpath.
@@ -2759,12 +2770,8 @@ redo:
 		prefetch_freepointer(s, next_object);
 		stat(s, ALLOC_FASTPATH);
 	}
-	/*
-	 * If the object has been wiped upon free, make sure it's fully
-	 * initialized by zeroing out freelist pointer.
-	 */
-	if (unlikely(slab_want_init_on_free(s)) && object)
-		memset(object + s->offset, 0, sizeof(void *));
+
+	maybe_wipe_obj_freeptr(s, object);
 
 	if (unlikely(slab_want_init_on_alloc(gfpflags, s)) && object)
 		memset(object, 0, s->object_size);
@@ -3178,10 +3185,13 @@ int kmem_cache_alloc_bulk(struct kmem_ca
 				goto error;
 
 			c = this_cpu_ptr(s->cpu_slab);
+			maybe_wipe_obj_freeptr(s, p[i]);
+
 			continue; /* goto for-loop */
 		}
 		c->freelist = get_freepointer(s, object);
 		p[i] = object;
+		maybe_wipe_obj_freeptr(s, p[i]);
 	}
 	c->tid = next_tid(c->tid);
 	local_irq_enable();
_

Patches currently in -mm which might be from glider@xxxxxxxxxx are

mm-slub-init_on_free=1-should-wipe-freelist-ptr-for-bulk-allocations.patch
lib-test_meminit-add-a-kmem_cache_alloc_bulk-test.patch