On 3/8/22 12:41, Hyeonggon Yoo wrote:
> In later patch SLAB will also pass requests larger than order-1 page
> to page allocator. Move kmalloc_large_node() to slab_common.c.
> 
> Fold kmalloc_large_node_hook() into kmalloc_large_node() as there is
> no other caller.
> 
> Move tracepoint in kmalloc_large_node().
> 
> Add flag fix code. This exist in kmalloc_large() but omitted in
> kmalloc_large_node().
> 
> Signed-off-by: Hyeonggon Yoo <42.hyeyoo@xxxxxxxxx>
> ---
>  include/linux/slab.h |  3 +++
>  mm/slab_common.c     | 26 ++++++++++++++++++++++++
>  mm/slub.c            | 47 ++++----------------------------------------
>  3 files changed, 33 insertions(+), 43 deletions(-)
> 
> <snip>
> 
> @@ -4874,15 +4842,8 @@ void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags,
>  	struct kmem_cache *s;
>  	void *ret;
>  
> -	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE)) {
> -		ret = kmalloc_large_node(size, gfpflags, node);
> -
> -		trace_kmalloc_node(caller, ret,
> -				   size, PAGE_SIZE << get_order(size),
> -				   gfpflags, node);

Hmm this throws away the caller for tracing, so looks like an
unintended functional change.

> -
> -		return ret;
> -	}
> +	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
> +		return kmalloc_large_node(size, gfpflags, node);
> 
>  	s = kmalloc_slab(size, gfpflags);