The patch titled
     Subject: mm, slub: allocate private object map for validate_slab_cache()
has been removed from the -mm tree.  Its filename was
     mm-slub-allocate-private-object-map-for-validate_slab_cache.patch

This patch was dropped because an updated version will be merged

------------------------------------------------------
From: Vlastimil Babka <vbabka@xxxxxxx>
Subject: mm, slub: allocate private object map for validate_slab_cache()

validate_slab_cache() is called either to handle a sysfs write, or from a
self-test context.  In both situations it's straightforward to preallocate
a private object bitmap instead of grabbing the shared static one meant
for critical sections, so let's do that.

Link: https://lkml.kernel.org/r/20210805152000.12817-4-vbabka@xxxxxxx
Signed-off-by: Vlastimil Babka <vbabka@xxxxxxx>
Acked-by: Christoph Lameter <cl@xxxxxxxxx>
Acked-by: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Cc: David Rientjes <rientjes@xxxxxxxxxx>
Cc: Jann Horn <jannh@xxxxxxxxxx>
Cc: Jesper Dangaard Brouer <brouer@xxxxxxxxxx>
Cc: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
Cc: Mike Galbraith <efault@xxxxxx>
Cc: Pekka Enberg <penberg@xxxxxxxxxx>
Cc: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/slub.c |   24 +++++++++++++++---------
 1 file changed, 15 insertions(+), 9 deletions(-)

--- a/mm/slub.c~mm-slub-allocate-private-object-map-for-validate_slab_cache
+++ a/mm/slub.c
@@ -4679,11 +4679,11 @@ static int count_total(struct page *page
 #endif
 
 #ifdef CONFIG_SLUB_DEBUG
-static void validate_slab(struct kmem_cache *s, struct page *page)
+static void validate_slab(struct kmem_cache *s, struct page *page,
+			  unsigned long *obj_map)
 {
 	void *p;
 	void *addr = page_address(page);
-	unsigned long *map;
 
 	slab_lock(page);
 
@@ -4691,21 +4691,20 @@ static void validate_slab(struct kmem_ca
 		goto unlock;
 
 	/* Now we know that a valid freelist exists */
-	map = get_map(s, page);
+	__fill_map(obj_map, s, page);
 	for_each_object(p, s, addr, page->objects) {
-		u8 val = test_bit(__obj_to_index(s, addr, p), map) ?
+		u8 val = test_bit(__obj_to_index(s, addr, p), obj_map) ?
 			 SLUB_RED_INACTIVE : SLUB_RED_ACTIVE;
 
 		if (!check_object(s, page, p, val))
 			break;
 	}
-	put_map(map);
 unlock:
 	slab_unlock(page);
 }
 
 static int validate_slab_node(struct kmem_cache *s,
-		struct kmem_cache_node *n)
+		struct kmem_cache_node *n, unsigned long *obj_map)
 {
 	unsigned long count = 0;
 	struct page *page;
@@ -4714,7 +4713,7 @@ static int validate_slab_node(struct kme
 	spin_lock_irqsave(&n->list_lock, flags);
 
 	list_for_each_entry(page, &n->partial, slab_list) {
-		validate_slab(s, page);
+		validate_slab(s, page, obj_map);
 		count++;
 	}
 	if (count != n->nr_partial) {
@@ -4727,7 +4726,7 @@ static int validate_slab_node(struct kme
 		goto out;
 
 	list_for_each_entry(page, &n->full, slab_list) {
-		validate_slab(s, page);
+		validate_slab(s, page, obj_map);
 		count++;
 	}
 	if (count != atomic_long_read(&n->nr_slabs)) {
@@ -4746,10 +4745,17 @@ long validate_slab_cache(struct kmem_cac
 	int node;
 	unsigned long count = 0;
 	struct kmem_cache_node *n;
+	unsigned long *obj_map;
+
+	obj_map = bitmap_alloc(oo_objects(s->oo), GFP_KERNEL);
+	if (!obj_map)
+		return -ENOMEM;
 
 	flush_all(s);
 	for_each_kmem_cache_node(s, node, n)
-		count += validate_slab_node(s, n);
+		count += validate_slab_node(s, n, obj_map);
+
+	bitmap_free(obj_map);
 
 	return count;
 }
_

Patches currently in -mm which might be from vbabka@xxxxxxx are

mm-slub-dont-disable-irq-for-debug_check_no_locks_freed.patch
mm-slub-remove-redundant-unfreeze_partials-from-put_cpu_partial.patch
mm-slub-extract-get_partial-from-new_slab_objects.patch
mm-slub-dissolve-new_slab_objects-into-___slab_alloc.patch
mm-slub-return-slab-page-from-get_partial-and-set-c-page-afterwards.patch
mm-slub-restructure-new-page-checks-in-___slab_alloc.patch
mm-slub-simplify-kmem_cache_cpu-and-tid-setup.patch
mm-slub-move-disabling-enabling-irqs-to-___slab_alloc.patch
mm-slub-do-initial-checks-in-___slab_alloc-with-irqs-enabled.patch
mm-slub-move-disabling-irqs-closer-to-get_partial-in-___slab_alloc.patch
mm-slub-restore-irqs-around-calling-new_slab.patch
mm-slub-validate-slab-from-partial-list-or-page-allocator-before-making-it-cpu-slab.patch
mm-slub-check-new-pages-with-restored-irqs.patch
mm-slub-stop-disabling-irqs-around-get_partial.patch
mm-slub-move-reset-of-c-page-and-freelist-out-of-deactivate_slab.patch
mm-slub-make-locking-in-deactivate_slab-irq-safe.patch
mm-slub-call-deactivate_slab-without-disabling-irqs.patch
mm-slub-move-irq-control-into-unfreeze_partials.patch
mm-slub-discard-slabs-in-unfreeze_partials-without-irqs-disabled.patch
mm-slub-detach-whole-partial-list-at-once-in-unfreeze_partials.patch
mm-slub-separate-detaching-of-partial-list-in-unfreeze_partials-from-unfreezing.patch
mm-slub-only-disable-irq-with-spin_lock-in-__unfreeze_partials.patch
mm-slub-dont-disable-irqs-in-slub_cpu_dead.patch
mm-slub-optionally-save-restore-irqs-in-slab_lock.patch
mm-slub-make-slab_lock-disable-irqs-with-preempt_rt.patch
mm-slub-protect-put_cpu_partial-with-disabled-irqs-instead-of-cmpxchg.patch
mm-slub-use-migrate_disable-on-preempt_rt.patch
mm-slub-convert-kmem_cpu_slab-protection-to-local_lock.patch
mm-slub-remove-conditional-locking-parameter-from-flush_slab.patch
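
For readers following along outside the kernel tree, the pattern the patch
applies -- allocate one private scratch bitmap before the walk, reuse it for
every slab that gets validated, and free it once at the end, instead of
repeatedly taking a shared static map -- can be sketched in plain userspace C.
This is an illustrative sketch only, not kernel code; every name in it
(fill_object_map, validate_one_slab, max_objs) is made up for the example.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BITS_PER_ULONG	(8 * sizeof(unsigned long))
#define BITMAP_LONGS(n)	(((n) + BITS_PER_ULONG - 1) / BITS_PER_ULONG)

/* Pretend every odd-indexed object on a slab is free. */
static void fill_object_map(unsigned long *map, size_t nr_objs)
{
	memset(map, 0, BITMAP_LONGS(nr_objs) * sizeof(unsigned long));
	for (size_t i = 1; i < nr_objs; i += 2)
		map[i / BITS_PER_ULONG] |= 1UL << (i % BITS_PER_ULONG);
}

/* Validate one "slab", reusing the caller's preallocated scratch bitmap. */
static size_t validate_one_slab(size_t nr_objs, unsigned long *obj_map)
{
	size_t free_objs = 0;

	fill_object_map(obj_map, nr_objs);	/* cf. __fill_map() */
	for (size_t i = 0; i < nr_objs; i++)
		if (obj_map[i / BITS_PER_ULONG] & (1UL << (i % BITS_PER_ULONG)))
			free_objs++;		/* cf. the SLUB_RED_* check */
	return free_objs;
}

int main(void)
{
	const size_t max_objs = 64;	/* cf. oo_objects(s->oo) */
	unsigned long *obj_map;
	size_t total = 0;

	/* One allocation up front, cf. bitmap_alloc(..., GFP_KERNEL). */
	obj_map = calloc(BITMAP_LONGS(max_objs), sizeof(unsigned long));
	if (!obj_map)
		return 1;

	for (int slab = 0; slab < 4; slab++)	/* cf. walking the per-node lists */
		total += validate_one_slab(max_objs, obj_map);

	free(obj_map);				/* cf. bitmap_free() */
	printf("free objects counted: %zu\n", total);
	return 0;
}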