From: Vlastimil Babka <vbabka@xxxxxxx>
Subject: mm, slub: extract get_partial() from new_slab_objects()

The later patches will need more fine-grained control over individual
actions in ___slab_alloc(), the only caller of new_slab_objects(), so
this is a first preparatory step with no functional change.

This adds a goto label that appears unnecessary at this point, but will
be useful for later changes.

Link: https://lkml.kernel.org/r/20210805152000.12817-8-vbabka@xxxxxxx
Signed-off-by: Vlastimil Babka <vbabka@xxxxxxx>
Acked-by: Christoph Lameter <cl@xxxxxxxxx>
Cc: David Rientjes <rientjes@xxxxxxxxxx>
Cc: Jann Horn <jannh@xxxxxxxxxx>
Cc: Jesper Dangaard Brouer <brouer@xxxxxxxxxx>
Cc: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Cc: Mike Galbraith <efault@xxxxxx>
Cc: Pekka Enberg <penberg@xxxxxxxxxx>
Cc: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/slub.c |   12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

--- a/mm/slub.c~mm-slub-extract-get_partial-from-new_slab_objects
+++ a/mm/slub.c
@@ -2599,17 +2599,12 @@ slab_out_of_memory(struct kmem_cache *s,
 static inline void *new_slab_objects(struct kmem_cache *s, gfp_t flags,
 			int node, struct kmem_cache_cpu **pc)
 {
-	void *freelist;
+	void *freelist = NULL;
 	struct kmem_cache_cpu *c = *pc;
 	struct page *page;
 
 	WARN_ON_ONCE(s->ctor && (flags & __GFP_ZERO));
 
-	freelist = get_partial(s, flags, node, c);
-
-	if (freelist)
-		return freelist;
-
 	page = new_slab(s, flags, node);
 	if (page) {
 		c = raw_cpu_ptr(s->cpu_slab);
@@ -2773,6 +2768,10 @@ new_slab:
 		goto redo;
 	}
 
+	freelist = get_partial(s, gfpflags, node, c);
+	if (freelist)
+		goto check_new_page;
+
 	freelist = new_slab_objects(s, gfpflags, node, &c);
 
 	if (unlikely(!freelist)) {
@@ -2780,6 +2779,7 @@ new_slab:
 		return NULL;
 	}
 
+check_new_page:
 	page = c->page;
 	if (likely(!kmem_cache_debug(s) && pfmemalloc_match(page, gfpflags)))
 		goto load_freelist;
_
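
For review convenience, a condensed sketch of how the new_slab: slow path
of ___slab_alloc() reads with this patch applied. This is an illustrative
simplification, not a compilable excerpt: it elides the deactivate_slab()
and retry logic above the hunks, and the slab_out_of_memory() call is
assumed from the surrounding mainline code of this era rather than shown
in the diff context itself.

new_slab:
	/*
	 * Try to refill from a node partial list first; this call
	 * previously lived inside new_slab_objects().
	 */
	freelist = get_partial(s, gfpflags, node, c);
	if (freelist)
		goto check_new_page;

	/* Fall back to allocating a fresh slab page. */
	freelist = new_slab_objects(s, gfpflags, node, &c);
	if (unlikely(!freelist)) {
		slab_out_of_memory(s, gfpflags, node);
		return NULL;
	}

check_new_page:
	/*
	 * Both refill paths converge here. The label lets a successful
	 * get_partial() skip the new-slab allocation and its OOM check;
	 * the fallthrough path reaches it unchanged, which is why the
	 * label looks unnecessary until the later patches use it.
	 */
	page = c->page;
	if (likely(!kmem_cache_debug(s) && pfmemalloc_match(page, gfpflags)))
		goto load_freelist;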