The patch titled
     Subject: slab-fix-oops-when-reading-proc-slab_allocators-v2
has been removed from the -mm tree.  Its filename was
     slab-fix-oops-when-reading-proc-slab_allocators-v2.patch

This patch was dropped because it was folded into
slab-fix-oops-when-reading-proc-slab_allocators.patch

------------------------------------------------------
From: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
Subject: slab-fix-oops-when-reading-proc-slab_allocators-v2

v2: edit one more function, calculate_slab_order(), that wants to know
how much space per object is spent for freelist management.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
Reported-by: Dave Jones <davej@xxxxxxxxxx>
Reported-by: Tetsuo Handa <penguin-kernel@xxxxxxxxxxxxxxxxxxx>
Reviewed-by: Vladimir Davydov <vdavydov@xxxxxxxxxxxxx>
Cc: Christoph Lameter <cl@xxxxxxxxx>
Cc: Pekka Enberg <penberg@xxxxxxxxxx>
Cc: David Rientjes <rientjes@xxxxxxxxxx>
Cc: <stable@xxxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/slab.c |    5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff -puN mm/slab.c~slab-fix-oops-when-reading-proc-slab_allocators-v2 mm/slab.c
--- a/mm/slab.c~slab-fix-oops-when-reading-proc-slab_allocators-v2
+++ a/mm/slab.c
@@ -2093,13 +2093,16 @@ static size_t calculate_slab_order(struc
 			break;
 
 		if (flags & CFLGS_OFF_SLAB) {
+			size_t freelist_size_per_obj = sizeof(freelist_idx_t);
 			/*
 			 * Max number of objs-per-slab for caches which
 			 * use off-slab slabs. Needed to avoid a possible
 			 * looping condition in cache_grow().
 			 */
+			if (IS_ENABLED(CONFIG_DEBUG_SLAB_LEAK))
+				freelist_size_per_obj += sizeof(char);
 			offslab_limit = size;
-			offslab_limit /= sizeof(freelist_idx_t);
+			offslab_limit /= freelist_size_per_obj;
 			if (num > offslab_limit)
 				break;
 		}
_

Patches currently in -mm which might be from iamjoonsoo.kim@xxxxxxx are

slab-maintainer-update.patch
slab-fix-oops-when-reading-proc-slab_allocators.patch
dma-cma-fix-possible-memory-leak.patch
mm-slabc-add-__init-to-init_lock_keys.patch
slab-common-add-functions-for-kmem_cache_node-access.patch
slub-use-new-node-functions.patch
slub-use-new-node-functions-fix.patch
slab-use-get_node-and-kmem_cache_node-functions.patch
slab-use-get_node-and-kmem_cache_node-functions-fix.patch
slab-use-get_node-and-kmem_cache_node-functions-fix-2.patch
mm-slabh-wrap-the-whole-file-with-guarding-macro.patch
mm-slub-mark-resiliency_test-as-init-text.patch
mm-slub-slub_debug=n-use-the-same-alloc-free-hooks-as-for-slub_debug=y.patch
vmalloc-use-rcu-list-iterator-to-reduce-vmap_area_lock-contention.patch
memcg-cleanup-memcg_cache_params-refcnt-usage.patch
memcg-destroy-kmem-caches-when-last-slab-is-freed.patch
memcg-mark-caches-that-belong-to-offline-memcgs-as-dead.patch
slub-dont-fail-kmem_cache_shrink-if-slab-placement-optimization-fails.patch
slub-make-slab_free-non-preemptable.patch
memcg-wait-for-kfrees-to-finish-before-destroying-cache.patch
slub-make-dead-memcg-caches-discard-free-slabs-immediately.patch
slab-do-not-keep-free-objects-slabs-on-dead-memcg-caches.patch
dma-cma-separate-core-cma-management-codes-from-dma-apis.patch
dma-cma-support-alignment-constraint-on-cma-region.patch
dma-cma-support-arbitrary-bitmap-granularity.patch
dma-cma-support-arbitrary-bitmap-granularity-fix.patch
cma-generalize-cma-reserved-area-management-functionality.patch
cma-generalize-cma-reserved-area-management-functionality-fix.patch
ppc-kvm-cma-use-general-cma-reserved-area-management-framework.patch
mm-cma-clean-up-cma-allocation-error-path.patch
mm-cma-change-cma_declare_contiguous-to-obey-coding-convention.patch
mm-cma-clean-up-log-message.patch
mm-compactionc-isolate_freepages_block-small-tuneup.patch
page-owners-correct-page-order-when-to-free-page.patch
--
To unsubscribe from this list: send the line "unsubscribe stable" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html