On 4/14/22 10:57, Hyeonggon Yoo wrote:
> In later patch SLAB will also pass requests larger than order-1 page
> to page allocator. Move kmalloc_large_node() to slab_common.c.
>
> Fold kmalloc_large_node_hook() into kmalloc_large_node() as there is
> no other caller.
>
> Signed-off-by: Hyeonggon Yoo <42.hyeyoo@xxxxxxxxx>

Reviewed-by: Vlastimil Babka <vbabka@xxxxxxx>

> ---
>  include/linux/slab.h |  3 +++
>  mm/slab_common.c     | 22 ++++++++++++++++++++++
>  mm/slub.c            | 25 -------------------------
>  3 files changed, 25 insertions(+), 25 deletions(-)
>
> diff --git a/include/linux/slab.h b/include/linux/slab.h
> index 6f6e22959b39..97336acbebbf 100644
> --- a/include/linux/slab.h
> +++ b/include/linux/slab.h
> @@ -486,6 +486,9 @@ static __always_inline void *kmem_cache_alloc_node_trace(struct kmem_cache *s, g
>
>  extern void *kmalloc_large(size_t size, gfp_t flags) __assume_page_alignment
>  						      __alloc_size(1);
> +
> +extern void *kmalloc_large_node(size_t size, gfp_t flags, int node)
> +			__assume_page_alignment __alloc_size(1);

The usual :)
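
For context, the function being moved boils down to a direct page-allocator
call plus the hooks that kmalloc_large_node_hook() used to apply. A rough
sketch of the folded result in mm/slab_common.c, reconstructed from the
pre-move SLUB implementation of that era (exact statistics accounting and
hook ordering may differ in the actual patch):

#include <linux/kasan.h>
#include <linux/kmemleak.h>
#include <linux/mm.h>
#include <linux/slab.h>

void *kmalloc_large_node(size_t size, gfp_t flags, int node)
{
	struct page *page;
	void *ptr = NULL;
	unsigned int order = get_order(size);

	/* Large kmalloc requests bypass the slab caches entirely. */
	flags |= __GFP_COMP;
	page = alloc_pages_node(node, flags, order);
	if (page) {
		ptr = page_address(page);
		/* Account the pages as unreclaimable slab memory. */
		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
				      PAGE_SIZE << order);
	}

	/*
	 * Formerly kmalloc_large_node_hook(): KASAN may return a tagged
	 * pointer, so the kmemleak hook runs after it.
	 */
	ptr = kasan_kmalloc_large(ptr, size, flags);
	kmemleak_alloc(ptr, size, 1, flags);

	return ptr;
}

With the function in slab_common.c, both SLAB and SLUB can route
larger-than-order-1 requests through it, which is what the later patch in
the series relies on.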