The patch titled
     Subject: mm, slub: remove runtime allocation order changes
has been added to the -mm tree.  Its filename is
     mm-slub-remove-runtime-allocation-order-changes.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-slub-remove-runtime-allocation-order-changes.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-slub-remove-runtime-allocation-order-changes.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when
    testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Vlastimil Babka <vbabka@xxxxxxx>
Subject: mm, slub: remove runtime allocation order changes

SLUB allows runtime changing of the page allocation order by writing into
the /sys/kernel/slab/<cache>/order file.  Jann has reported [1] that this
interface allows the order to be set too small, leading to crashes.

While it's possible to fix the immediate issue, closer inspection reveals
potential races.  Storing the new order calls calculate_sizes(), which
non-atomically updates a lot of kmem_cache fields while the cache is
still in use.  Unexpected behavior might occur even if the fields are set
to the same values as before.

This could be fixed by splitting out the part of calculate_sizes() that
depends on forced_order, so that we only update the kmem_cache.oo field.
That could still race with init_cache_random_seq(), shuffle_freelist()
and allocate_slab().  Perhaps it's possible to audit those paths and e.g.
add some READ_ONCE/WRITE_ONCE accesses, but it might be easier just to
remove the runtime order changes, which is what this patch does.  If
there are valid use cases for per-cache order setting, we could e.g.
extend the boot parameters to do that.

[1] https://lore.kernel.org/r/CAG48ez31PP--h6_FzVyfJ4H86QYczAFPdxtJHUEEan+7VJETAQ@xxxxxxxxxxxxxx
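For illustration only (not part of the patch): with order_store() gone,
the order attribute is created via SLAB_ATTR_RO, so the sysfs file has no
store method and no write permission bits, and kernfs rejects an attempt
to even open it for writing.  A minimal userspace sketch of the resulting
behavior; kmalloc-64 is just an example cache name, and reading the
attribute may require root depending on its file mode:

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	const char *path = "/sys/kernel/slab/kmalloc-64/order";
	char buf[16];
	ssize_t n;
	int fd;

	/* Reading the current allocation order still works as before. */
	fd = open(path, O_RDONLY);
	if (fd < 0) {
		perror("open for read");
		return 1;
	}
	n = read(fd, buf, sizeof(buf) - 1);
	if (n > 0) {
		buf[n] = '\0';
		printf("current order: %s", buf);
	}
	close(fd);

	/*
	 * With the attribute read-only, the write open is expected to
	 * fail (typically EACCES), closing the crash vector from [1].
	 */
	fd = open(path, O_WRONLY);
	if (fd < 0)
		printf("write open rejected: %s\n", strerror(errno));
	else
		close(fd);

	return 0;
}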
Link: http://lkml.kernel.org/r/20200610163135.17364-4-vbabka@xxxxxxx
Signed-off-by: Vlastimil Babka <vbabka@xxxxxxx>
Acked-by: Christoph Lameter <cl@xxxxxxxxx>
Reported-by: Jann Horn <jannh@xxxxxxxxxx>
Reviewed-by: Kees Cook <keescook@xxxxxxxxxxxx>
Acked-by: Roman Gushchin <guro@xxxxxx>
Cc: Vijayanand Jitta <vjitta@xxxxxxxxxxxxxx>
Cc: David Rientjes <rientjes@xxxxxxxxxx>
Cc: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
Cc: Pekka Enberg <penberg@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/slub.c |   19 +------------------
 1 file changed, 1 insertion(+), 18 deletions(-)

--- a/mm/slub.c~mm-slub-remove-runtime-allocation-order-changes
+++ a/mm/slub.c
@@ -5111,28 +5111,11 @@ static ssize_t objs_per_slab_show(struct
 }
 SLAB_ATTR_RO(objs_per_slab);
 
-static ssize_t order_store(struct kmem_cache *s,
-				const char *buf, size_t length)
-{
-	unsigned int order;
-	int err;
-
-	err = kstrtouint(buf, 10, &order);
-	if (err)
-		return err;
-
-	if (order > slub_max_order || order < slub_min_order)
-		return -EINVAL;
-
-	calculate_sizes(s, order);
-	return length;
-}
-
 static ssize_t order_show(struct kmem_cache *s, char *buf)
 {
 	return sprintf(buf, "%u\n", oo_order(s->oo));
 }
-SLAB_ATTR(order);
+SLAB_ATTR_RO(order);
 
 static ssize_t min_partial_show(struct kmem_cache *s, char *buf)
 {
_

Patches currently in -mm which might be from vbabka@xxxxxxx are

mm-slub-extend-slub_debug-syntax-for-multiple-blocks.patch
mm-slub-make-some-slub_debug-related-attributes-read-only.patch
mm-slub-remove-runtime-allocation-order-changes.patch
mm-slub-make-remaining-slub_debug-related-attributes-read-only.patch
mm-slub-make-reclaim_account-attribute-read-only.patch
mm-slub-introduce-static-key-for-slub_debug.patch
mm-slub-introduce-kmem_cache_debug_flags.patch
mm-slub-extend-checks-guarded-by-slub_debug-static-key.patch
mm-slab-slub-move-and-improve-cache_from_obj.patch