From: Abel Wu <wuyun.wu@xxxxxxxxxx>
Subject: mm/slub: fix missing ALLOC_SLOWPATH stat when bulk alloc

The ALLOC_SLOWPATH statistic is currently missing for bulk allocation.
Fix it by doing the accounting in the allocation slow path.

Link: http://lkml.kernel.org/r/20200811022427.1363-1-wuyun.wu@xxxxxxxxxx
Signed-off-by: Abel Wu <wuyun.wu@xxxxxxxxxx>
Reviewed-by: Pekka Enberg <penberg@xxxxxxxxxx>
Acked-by: David Rientjes <rientjes@xxxxxxxxxx>
Cc: Christoph Lameter <cl@xxxxxxxxx>
Cc: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
Cc: Hewenliang <hewenliang4@xxxxxxxxxx>
Cc: Hu Shiyuan <hushiyuan@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/slub.c |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

--- a/mm/slub.c~mm-slub-fix-missing-alloc_slowpath-stat-when-bulk-alloc
+++ a/mm/slub.c
@@ -2661,6 +2661,8 @@ static void *___slab_alloc(struct kmem_c
 	void *freelist;
 	struct page *page;
 
+	stat(s, ALLOC_SLOWPATH);
+
 	page = c->page;
 	if (!page) {
 		/*
@@ -2850,7 +2852,6 @@ redo:
 	page = c->page;
 	if (unlikely(!object || !node_match(page, node))) {
 		object = __slab_alloc(s, gfpflags, node, addr, c);
-		stat(s, ALLOC_SLOWPATH);
 	} else {
 		void *next_object = get_freepointer_safe(s, object);
 
_
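
[Editor's note: the following is a minimal user-space sketch, not kernel code, and
all names in it are illustrative. It only shows the general idea behind the patch:
when both the single-object path and the bulk path funnel into one shared slow-path
helper, bumping the counter inside that helper counts both callers, whereas counting
in only one caller (as before the patch) misses the other.]

	#include <stdio.h>

	static unsigned long alloc_slowpath_stat;

	/* shared slow path, standing in for ___slab_alloc() */
	static void *slow_path_alloc(void)
	{
		alloc_slowpath_stat++;	/* single accounting point covers all callers */
		return NULL;		/* allocation details elided */
	}

	/* single-object caller, standing in for slab_alloc_node() */
	static void *single_alloc(void)
	{
		return slow_path_alloc();	/* fast path missed, fall back */
	}

	/* bulk caller, standing in for kmem_cache_alloc_bulk() */
	static void bulk_alloc(unsigned int nr)
	{
		unsigned int i;

		for (i = 0; i < nr; i++)
			slow_path_alloc();	/* bulk path also hits the slow path */
	}

	int main(void)
	{
		single_alloc();
		bulk_alloc(3);
		/* prints 4: the bulk allocations are counted too */
		printf("ALLOC_SLOWPATH=%lu\n", alloc_slowpath_stat);
		return 0;
	}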