On Tue, Apr 26, 2022 at 08:01:27PM +0200, Vlastimil Babka wrote:
> On 4/14/22 10:57, Hyeonggon Yoo wrote:
> > Implement only __kmem_cache_alloc_node() in slab allocators and make
> > kmem_cache_alloc{,node,lru} wrapper of it.
> >
> > Now that kmem_cache_alloc{,node,lru} is inline function, we should
> > use _THIS_IP_ instead of _RET_IP_ for consistency.
>
> Hm yeah looks like this actually fixes some damage of obscured actual
> __RET_IP_ by the recent addition and wrapping of __kmem_cache_alloc_lru().
>
> > Signed-off-by: Hyeonggon Yoo <42.hyeyoo@xxxxxxxxx>
>
> Reviewed-by: Vlastimil Babka <vbabka@xxxxxxx>
>
> Some nits:
>
> > ---
> >  include/linux/slab.h | 52 ++++++++++++++++++++++++++++++++-----
> >  mm/slab.c            | 61 +++++---------------------------------------
> >  mm/slob.c            | 27 ++++++--------------
> >  mm/slub.c            | 35 +++++--------------------
> >  4 files changed, 67 insertions(+), 108 deletions(-)
> >
> > diff --git a/include/linux/slab.h b/include/linux/slab.h
> > index 143830f57a7f..1b5bdcb0fd31 100644
> > --- a/include/linux/slab.h
> > +++ b/include/linux/slab.h
> > @@ -429,9 +429,52 @@ void *__kmalloc(size_t size, gfp_t flags)
> >  	return __kmalloc_node(size, flags, NUMA_NO_NODE);
> >  }
> >
> > -void *kmem_cache_alloc(struct kmem_cache *s, gfp_t flags) __assume_slab_alignment __malloc;
> > -void *kmem_cache_alloc_lru(struct kmem_cache *s, struct list_lru *lru,
> > -			   gfp_t gfpflags) __assume_slab_alignment __malloc;
> > +
> > +void *__kmem_cache_alloc_node(struct kmem_cache *s, struct list_lru *lru,
> > +			      gfp_t gfpflags, int node, unsigned long caller __maybe_unused)
> > +			      __assume_slab_alignment __malloc;
>
> I don't think caller needs to be __maybe_unused in the declaration nor any
> of the implementations of __kmem_cache_alloc_node(), all actually pass it on?

My intention was to give the compiler a hint for the CONFIG_TRACING=n case.
I'll check whether the compiler just optimizes them out without __maybe_unused.

Thanks!
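
P.S. To illustrate what I mean, here is a minimal sketch of the kind of
pattern I had in mind. It is not the actual slab code; example_alloc_node()
and trace_example_alloc() are made-up names used only for illustration:

	/*
	 * Hypothetical helper: 'caller' is only consumed when
	 * CONFIG_TRACING=y, so __maybe_unused hints the compiler that the
	 * argument may go unused in a CONFIG_TRACING=n build.
	 */
	static void *example_alloc_node(struct kmem_cache *s, gfp_t flags,
					int node,
					unsigned long caller __maybe_unused)
	{
		void *obj = kmem_cache_alloc_node(s, flags, node);

	#ifdef CONFIG_TRACING
		/* made-up tracepoint, for illustration only */
		trace_example_alloc(caller, (unsigned long)obj);
	#endif

		return obj;
	}

Whether that hint is actually needed is exactly what I want to verify; if
the compiler already optimizes the unused argument away after inlining,
the annotation can be dropped as you suggest.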