On Mon, May 16, 2022 at 11:41:27PM +0200, Vlastimil Babka wrote:
> On 5/16/22 21:10, Shakeel Butt wrote:
> > On Mon, May 16, 2022 at 11:53 AM Vasily Averin <vvs@xxxxxxxxxx> wrote:
> >>
> >> Slab caches marked with SLAB_ACCOUNT force accounting for every
> >> allocation from this cache even if the __GFP_ACCOUNT flag is not passed.
> >> Unfortunately, at the moment this flag is not visible in ftrace output,
> >> and this makes it difficult to analyze the accounted allocations.
> >>
> >> This patch adds the __GFP_ACCOUNT flag for allocations from slab caches
> >> marked with SLAB_ACCOUNT to the ftrace output.
> >>
> >> Signed-off-by: Vasily Averin <vvs@xxxxxxxxxx>
> >> ---
> >>  mm/slab.c | 3 +++
> >>  mm/slub.c | 3 +++
> >>  2 files changed, 6 insertions(+)
> >>
> >> diff --git a/mm/slab.c b/mm/slab.c
> >> index 0edb474edef1..4c3da8dfcbdb 100644
> >> --- a/mm/slab.c
> >> +++ b/mm/slab.c
> >> @@ -3492,6 +3492,9 @@ void *__kmem_cache_alloc_lru(struct kmem_cache *cachep, struct list_lru *lru,
> >
> > What about kmem_cache_alloc_node()?
> >
> >>  {
> >>         void *ret = slab_alloc(cachep, lru, flags, cachep->object_size, _RET_IP_);
> >>
> >> +       if (cachep->flags & SLAB_ACCOUNT)
> >
> > Should this 'if' be unlikely(), or should we trace cachep->flags
> > explicitly to avoid this branch altogether?
>
> Hm, I think ideally the tracepoint would accept cachep instead of the
> current cachep->*size parameters and do the necessary extraction and
> modification in its fast_assign.

+1 for fast_assign.
Changing flags just for tracing looks a bit excessive.
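
Roughly something like the below, i.e. an untested sketch against the
kmem_alloc event class in include/trace/events/kmem.h (exact field names
and the _node variants would need the same treatment): pass the cache
itself and let TP_fast_assign derive the sizes and patch up gfp_flags,
so the allocation hot path stays untouched and nothing is paid while
the event is disabled.

    DECLARE_EVENT_CLASS(kmem_alloc,

            TP_PROTO(unsigned long call_site, const void *ptr,
                     struct kmem_cache *s, gfp_t gfp_flags),

            TP_ARGS(call_site, ptr, s, gfp_flags),

            TP_STRUCT__entry(
                    __field(unsigned long,  call_site)
                    __field(const void *,   ptr)
                    __field(size_t,         bytes_req)
                    __field(size_t,         bytes_alloc)
                    __field(gfp_t,          gfp_flags)
            ),

            TP_fast_assign(
                    __entry->call_site      = call_site;
                    __entry->ptr            = ptr;
                    /* Extract sizes here instead of at every call site. */
                    __entry->bytes_req      = s->object_size;
                    __entry->bytes_alloc    = s->size;
                    __entry->gfp_flags      = gfp_flags;
                    /* Surface the forced accounting in the trace output only. */
                    if (s->flags & SLAB_ACCOUNT)
                            __entry->gfp_flags |= __GFP_ACCOUNT;
            ),

            TP_printk("call_site=%pS ptr=%p bytes_req=%zu bytes_alloc=%zu gfp_flags=%s",
                      (void *)__entry->call_site, __entry->ptr,
                      __entry->bytes_req, __entry->bytes_alloc,
                      show_gfp_flags(__entry->gfp_flags))
    );

That way the SLAB_ACCOUNT check runs only when the tracepoint is
actually enabled, which also answers the unlikely() question above.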