The patch titled
     Subject: mm, slub: validate slab from partial list or page allocator before making it cpu slab
has been added to the -mm tree.  Its filename is
     mm-slub-validate-slab-from-partial-list-or-page-allocator-before-making-it-cpu-slab.patch

This patch should soon appear at
    https://ozlabs.org/~akpm/mmots/broken-out/mm-slub-validate-slab-from-partial-list-or-page-allocator-before-making-it-cpu-slab.patch
and later at
    https://ozlabs.org/~akpm/mmotm/broken-out/mm-slub-validate-slab-from-partial-list-or-page-allocator-before-making-it-cpu-slab.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Vlastimil Babka <vbabka@xxxxxxx>
Subject: mm, slub: validate slab from partial list or page allocator before making it cpu slab

When we obtain a new slab page from the node partial list or the page
allocator, we assign it to kmem_cache_cpu, perform some checks, and if
they fail, we undo the assignment.

In order to allow doing the checks without irqs disabled, restructure the
code so that the checks are done first and the kmem_cache_cpu.page
assignment is made only after they pass.
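The shape of the change is easier to see in a small standalone sketch.
The program below is illustrative only, not mm/slub.c code: struct
cpu_cache, candidate_ok(), flush_old() and install_candidate() are
hypothetical stand-ins for kmem_cache_cpu, the debug/pfmemalloc checks,
flush_slab() and the slow-path assignment.  It shows the post-patch
ordering: validate the candidate page first, and only then flush any
previously cached page and publish the new one, so a failed check
leaves nothing to undo.

/*
 * Standalone sketch (not kernel code) of the restructuring above.
 * All names are hypothetical stand-ins for the mm/slub.c machinery.
 */
#include <stdbool.h>
#include <stdio.h>

struct page_stub { bool valid; };             /* stand-in for struct page */
struct cpu_cache { struct page_stub *page; }; /* stand-in for kmem_cache_cpu */

static bool candidate_ok(struct page_stub *page)
{
	/* models the alloc_debug_processing()/pfmemalloc_match() checks */
	return page && page->valid;
}

static void flush_old(struct cpu_cache *c)
{
	/* models flush_slab(): retire the currently cached page */
	printf("flushing old cpu slab %p\n", (void *)c->page);
	c->page = NULL;
}

/* Post-patch ordering: check first, assign only on success. */
static bool install_candidate(struct cpu_cache *c, struct page_stub *page)
{
	if (!candidate_ok(page))
		return false;	/* no assignment was made, nothing to undo */

	if (c->page)		/* unlikely(c->page) in the real code */
		flush_old(c);
	c->page = page;		/* publish only after the checks passed */
	return true;
}

int main(void)
{
	struct cpu_cache c = { .page = NULL };
	struct page_stub good = { .valid = true }, bad = { .valid = false };

	/* the bad candidate is rejected without ever becoming c.page */
	printf("bad:  %d c.page=%p\n", install_candidate(&c, &bad), (void *)c.page);
	printf("good: %d c.page=%p\n", install_candidate(&c, &good), (void *)c.page);
	return 0;
}

Because the assignment now happens last, the checks themselves no longer
need irqs disabled, which is what the later
mm-slub-check-new-pages-with-restored-irqs.patch in the series below
appears to build on.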
Link: https://lkml.kernel.org/r/20210805152000.12817-17-vbabka@xxxxxxx
Signed-off-by: Vlastimil Babka <vbabka@xxxxxxx>
Cc: Christoph Lameter <cl@xxxxxxxxx>
Cc: David Rientjes <rientjes@xxxxxxxxxx>
Cc: Jann Horn <jannh@xxxxxxxxxx>
Cc: Jesper Dangaard Brouer <brouer@xxxxxxxxxx>
Cc: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Cc: Mike Galbraith <efault@xxxxxx>
Cc: Pekka Enberg <penberg@xxxxxxxxxx>
Cc: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/slub.c |   17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

--- a/mm/slub.c~mm-slub-validate-slab-from-partial-list-or-page-allocator-before-making-it-cpu-slab
+++ a/mm/slub.c
@@ -2775,10 +2775,8 @@ new_objects:
 	lockdep_assert_irqs_disabled();
 
 	freelist = get_partial(s, gfpflags, node, &page);
-	if (freelist) {
-		c->page = page;
+	if (freelist)
 		goto check_new_page;
-	}
 
 	local_irq_restore(flags);
 	put_cpu_ptr(s->cpu_slab);
@@ -2791,9 +2789,6 @@ new_objects:
 	}
 	local_irq_save(flags);
 
-	if (c->page)
-		flush_slab(s, c);
-
 	/*
 	 * No other reference to the page yet so we can
 	 * muck around with it freely without cmpxchg
@@ -2802,14 +2797,12 @@ new_objects:
 	page->freelist = NULL;
 
 	stat(s, ALLOC_SLAB);
-	c->page = page;
 
 check_new_page:
 
 	if (kmem_cache_debug(s)) {
 		if (!alloc_debug_processing(s, page, freelist, addr)) {
 			/* Slab failed checks. Next slab needed */
-			c->page = NULL;
 			local_irq_restore(flags);
 			goto new_slab;
 		} else {
@@ -2828,10 +2821,18 @@ check_new_page:
 	 */
 		goto return_single;
 
+	if (unlikely(c->page))
+		flush_slab(s, c);
+	c->page = page;
+
 	goto load_freelist;
 
 return_single:
 
+	if (unlikely(c->page))
+		flush_slab(s, c);
+	c->page = page;
+
 	deactivate_slab(s, page, get_freepointer(s, freelist), c);
 	local_irq_restore(flags);
 	return freelist;
_

Patches currently in -mm which might be from vbabka@xxxxxxx are

mm-slub-fix-slub_debug-disablement-for-list-of-slabs.patch
mm-slub-dont-call-flush_all-from-slab_debug_trace_open.patch
mm-slub-allocate-private-object-map-for-debugfs-listings.patch
mm-slub-allocate-private-object-map-for-validate_slab_cache.patch
mm-slub-dont-disable-irq-for-debug_check_no_locks_freed.patch
mm-slub-remove-redundant-unfreeze_partials-from-put_cpu_partial.patch
mm-slub-unify-cmpxchg_double_slab-and-__cmpxchg_double_slab.patch
mm-slub-extract-get_partial-from-new_slab_objects.patch
mm-slub-dissolve-new_slab_objects-into-___slab_alloc.patch
mm-slub-return-slab-page-from-get_partial-and-set-c-page-afterwards.patch
mm-slub-restructure-new-page-checks-in-___slab_alloc.patch
mm-slub-simplify-kmem_cache_cpu-and-tid-setup.patch
mm-slub-move-disabling-enabling-irqs-to-___slab_alloc.patch
mm-slub-do-initial-checks-in-___slab_alloc-with-irqs-enabled.patch
mm-slub-move-disabling-irqs-closer-to-get_partial-in-___slab_alloc.patch
mm-slub-restore-irqs-around-calling-new_slab.patch
mm-slub-validate-slab-from-partial-list-or-page-allocator-before-making-it-cpu-slab.patch
mm-slub-check-new-pages-with-restored-irqs.patch
mm-slub-stop-disabling-irqs-around-get_partial.patch
mm-slub-move-reset-of-c-page-and-freelist-out-of-deactivate_slab.patch
mm-slub-make-locking-in-deactivate_slab-irq-safe.patch
mm-slub-call-deactivate_slab-without-disabling-irqs.patch
mm-slub-move-irq-control-into-unfreeze_partials.patch
mm-slub-discard-slabs-in-unfreeze_partials-without-irqs-disabled.patch
mm-slub-detach-whole-partial-list-at-once-in-unfreeze_partials.patch
mm-slub-separate-detaching-of-partial-list-in-unfreeze_partials-from-unfreezing.patch
mm-slub-only-disable-irq-with-spin_lock-in-__unfreeze_partials.patch
mm-slub-dont-disable-irqs-in-slub_cpu_dead.patch
mm-slab-make-flush_slab-possible-to-call-with-irqs-enabled.patch
mm-slub-optionally-save-restore-irqs-in-slab_lock.patch
mm-slub-make-slab_lock-disable-irqs-with-preempt_rt.patch
mm-slub-protect-put_cpu_partial-with-disabled-irqs-instead-of-cmpxchg.patch
mm-slub-use-migrate_disable-on-preempt_rt.patch
mm-slub-convert-kmem_cpu_slab-protection-to-local_lock.patch