The patch titled
     Subject: slub: limit count of partial slabs scanned to gather statistics
has been removed from the -mm tree.  Its filename was
     slub-limit-count-of-partial-slabs-scanned-to-gather-statistics.patch

This patch was dropped because an updated version will be merged

------------------------------------------------------
From: Konstantin Khlebnikov <khlebnikov@xxxxxxxxxxxxxx>
Subject: slub: limit count of partial slabs scanned to gather statistics

To get an exact count of free and used objects, slub has to scan the list
of partial slabs.  This may take a long time.  Scanning holds the
spinlock and blocks allocations which move partial slabs to per-cpu lists
and back.

Example found in the wild:

# cat /sys/kernel/slab/dentry/partial
14478538 N0=7329569 N1=7148969

# time cat /sys/kernel/slab/dentry/objects
286225471 N0=136967768 N1=149257703

real    0m1.722s
user    0m0.001s
sys     0m1.721s

The same problem in slab was addressed in commit f728b0a5d72a ("mm, slab:
faster active and free stats") by adding more kmem cache statistics.  For
slub, the same approach would require an atomic operation on the fast
path when an object is freed.

Let's simply limit the count of scanned slabs and print a warning.  The
limit is set in /sys/module/slub/parameters/max_partial_to_count.  The
default is 10000, which should be enough for most sane cases.

Return a linear approximation if the list of partials is longer than the
limit.  Nobody should notice the difference.

Link: http://lkml.kernel.org/r/158860845968.33385.4165926113074799048.stgit@buzz
Signed-off-by: Konstantin Khlebnikov <khlebnikov@xxxxxxxxxxxxxx>
Cc: Christoph Lameter <cl@xxxxxxxxx>
Cc: Pekka Enberg <penberg@xxxxxxxxxx>
Cc: David Rientjes <rientjes@xxxxxxxxxx>
Cc: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/slub.c |   15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

--- a/mm/slub.c~slub-limit-count-of-partial-slabs-scanned-to-gather-statistics
+++ a/mm/slub.c
@@ -2451,16 +2451,29 @@ static inline unsigned long node_nr_objs
 #endif /* CONFIG_SLUB_DEBUG */
 
 #if defined(CONFIG_SLUB_DEBUG) || defined(CONFIG_SYSFS)
+
+static unsigned long max_partial_to_count __read_mostly = 10000;
+module_param(max_partial_to_count, ulong, 0644);
+
 static unsigned long count_partial(struct kmem_cache_node *n,
 					int (*get_count)(struct page *))
 {
+	unsigned long counted = 0;
 	unsigned long flags;
 	unsigned long x = 0;
 	struct page *page;
 
 	spin_lock_irqsave(&n->list_lock, flags);
-	list_for_each_entry(page, &n->partial, slab_list)
+	list_for_each_entry(page, &n->partial, slab_list) {
 		x += get_count(page);
+
+		if (++counted > max_partial_to_count) {
+			pr_warn_once("SLUB: too many partial slabs to count all objects, increase max_partial_to_count.\n");
+			/* Approximate total count of objects */
+			x = mult_frac(x, n->nr_partial, counted);
+			break;
+		}
+	}
 	spin_unlock_irqrestore(&n->list_lock, flags);
 	return x;
 }
_

Patches currently in -mm which might be from khlebnikov@xxxxxxxxxxxxxx are

kernel-watchdog-flush-all-printk-nmi-buffers-when-hardlockup-detected.patch
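
For illustration only, the linear approximation the patch falls back to can
be sketched as a small user-space C program.  The mult_frac() helper below
mirrors the kernel macro of the same name; nr_partial is taken from the
dentry example in the changelog, while the object count seen on the first
10000 slabs is an assumed figure for the demo, not a measured one.

/*
 * Minimal sketch of the extrapolation count_partial() applies once the
 * scan limit is hit: objects seen on the first `counted` partial slabs
 * are scaled up to the full length of the partial list.
 * Assumes a 64-bit unsigned long, as on the machines in the example.
 */
#include <stdio.h>

/* Overflow-reducing x * numer / denom, like the kernel's mult_frac(). */
static unsigned long mult_frac(unsigned long x, unsigned long numer,
			       unsigned long denom)
{
	unsigned long quot = x / denom;
	unsigned long rem  = x % denom;

	return quot * numer + rem * numer / denom;
}

int main(void)
{
	unsigned long nr_partial = 14478538;	/* partial slabs, from the example */
	unsigned long counted = 10000;		/* default max_partial_to_count */
	unsigned long x = 195000;		/* objects on those slabs (assumed) */

	printf("approx objects on partial slabs: %lu\n",
	       mult_frac(x, nr_partial, counted));
	return 0;
}

With these numbers the estimate comes out near the exact count shown in the
changelog, which is the point of the approximation: only the first 10000
slabs are scanned under the list_lock, and the rest are assumed to look
similar on average.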