+ mm-memcg-print-out-cgroup-ino-in-the-memcg-tracepoints.patch added to mm-unstable branch

The patch titled
     Subject: mm: memcg: print out cgroup ino in the memcg tracepoints
has been added to the -mm mm-unstable branch.  Its filename is
     mm-memcg-print-out-cgroup-ino-in-the-memcg-tracepoints.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-memcg-print-out-cgroup-ino-in-the-memcg-tracepoints.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Dmitry Rokosov <ddrokosov@xxxxxxxxxxxxxxxxx>
Subject: mm: memcg: print out cgroup ino in the memcg tracepoints
Date: Thu, 23 Nov 2023 22:39:36 +0300

Patch series "mm: memcg: improve vmscan tracepoints", v3.

The motivation behind this series is to improve the traceability and
understanding of memcg events.  By integrating the function cgroup_ino()
into the existing memcg tracepoints, this patch series introduces a new
tracepoint template for the begin() and end() events.  The template adds
a new entry field, ino, which stores the cgroup inode number, enabling
developers to easily identify the cgroup associated with a specific
memcg tracepoint event.

Additionally, this patch series introduces new shrink_memcg tracepoints
to facilitate tracing and debugging of non-direct memcg reclaim.


This patch (of 2):

Sometimes it becomes necessary to determine which cgroup a memcg
tracepoint event belongs to.  This is particularly relevant in scenarios
involving a large cgroup hierarchy, where users may wish to trace
reclaim within specific cgroups by applying a filter.
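With the ino field in place, such a filter can be set up through tracefs.
The session below is an illustrative sketch, not part of this patch: the
cgroup path /sys/fs/cgroup/workload and the inode value 1234 are made-up
examples, and the commands require root and a kernel with these
tracepoints.

```shell
# Find the inode number of the cgroup of interest (path is illustrative)
stat -c %i /sys/fs/cgroup/workload

# Restrict the begin tracepoint to that cgroup only (1234 stands in for
# the inode printed above), then enable it and watch the events
echo 'ino == 1234' > /sys/kernel/tracing/events/vmscan/mm_vmscan_memcg_reclaim_begin/filter
echo 1 > /sys/kernel/tracing/events/vmscan/mm_vmscan_memcg_reclaim_begin/enable
cat /sys/kernel/tracing/trace_pipe
```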

The function cgroup_ino() is a useful tool for this purpose.  To integrate
cgroup_ino() into the existing memcg tracepoints, this patch introduces a
new tracepoint template for the begin() and end() events.

Link: https://lkml.kernel.org/r/20231123193937.11628-1-ddrokosov@xxxxxxxxxxxxxxxxx
Link: https://lkml.kernel.org/r/20231123193937.11628-2-ddrokosov@xxxxxxxxxxxxxxxxx
Signed-off-by: Dmitry Rokosov <ddrokosov@xxxxxxxxxxxxxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Masami Hiramatsu (Google) <mhiramat@xxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxx>
Cc: Muchun Song <muchun.song@xxxxxxxxx>
Cc: Roman Gushchin <roman.gushchin@xxxxxxxxx>
Cc: Shakeel Butt <shakeelb@xxxxxxxxxx>
Cc: Steven Rostedt (Google) <rostedt@xxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 include/trace/events/vmscan.h |   73 ++++++++++++++++++++++++++------
 mm/vmscan.c                   |   10 ++--
 2 files changed, 66 insertions(+), 17 deletions(-)

--- a/include/trace/events/vmscan.h~mm-memcg-print-out-cgroup-ino-in-the-memcg-tracepoints
+++ a/include/trace/events/vmscan.h
@@ -141,19 +141,45 @@ DEFINE_EVENT(mm_vmscan_direct_reclaim_be
 );
 
 #ifdef CONFIG_MEMCG
-DEFINE_EVENT(mm_vmscan_direct_reclaim_begin_template, mm_vmscan_memcg_reclaim_begin,
 
-	TP_PROTO(int order, gfp_t gfp_flags),
+DECLARE_EVENT_CLASS(mm_vmscan_memcg_reclaim_begin_template,
 
-	TP_ARGS(order, gfp_flags)
+	TP_PROTO(int order, gfp_t gfp_flags, const struct mem_cgroup *memcg),
+
+	TP_ARGS(order, gfp_flags, memcg),
+
+	TP_STRUCT__entry(
+		__field(int, order)
+		__field(unsigned long, gfp_flags)
+		__field(ino_t, ino)
+	),
+
+	TP_fast_assign(
+		__entry->order = order;
+		__entry->gfp_flags = (__force unsigned long)gfp_flags;
+		__entry->ino = cgroup_ino(memcg->css.cgroup);
+	),
+
+	TP_printk("order=%d gfp_flags=%s memcg=%lu",
+		__entry->order,
+		show_gfp_flags(__entry->gfp_flags),
+		__entry->ino)
 );
 
-DEFINE_EVENT(mm_vmscan_direct_reclaim_begin_template, mm_vmscan_memcg_softlimit_reclaim_begin,
+DEFINE_EVENT(mm_vmscan_memcg_reclaim_begin_template, mm_vmscan_memcg_reclaim_begin,
 
-	TP_PROTO(int order, gfp_t gfp_flags),
+	TP_PROTO(int order, gfp_t gfp_flags, const struct mem_cgroup *memcg),
 
-	TP_ARGS(order, gfp_flags)
+	TP_ARGS(order, gfp_flags, memcg)
 );
+
+DEFINE_EVENT(mm_vmscan_memcg_reclaim_begin_template, mm_vmscan_memcg_softlimit_reclaim_begin,
+
+	TP_PROTO(int order, gfp_t gfp_flags, const struct mem_cgroup *memcg),
+
+	TP_ARGS(order, gfp_flags, memcg)
+);
+
 #endif /* CONFIG_MEMCG */
 
 DECLARE_EVENT_CLASS(mm_vmscan_direct_reclaim_end_template,
@@ -181,19 +207,42 @@ DEFINE_EVENT(mm_vmscan_direct_reclaim_en
 );
 
 #ifdef CONFIG_MEMCG
-DEFINE_EVENT(mm_vmscan_direct_reclaim_end_template, mm_vmscan_memcg_reclaim_end,
 
-	TP_PROTO(unsigned long nr_reclaimed),
+DECLARE_EVENT_CLASS(mm_vmscan_memcg_reclaim_end_template,
 
-	TP_ARGS(nr_reclaimed)
+	TP_PROTO(unsigned long nr_reclaimed, const struct mem_cgroup *memcg),
+
+	TP_ARGS(nr_reclaimed, memcg),
+
+	TP_STRUCT__entry(
+		__field(unsigned long, nr_reclaimed)
+		__field(ino_t, ino)
+	),
+
+	TP_fast_assign(
+		__entry->nr_reclaimed = nr_reclaimed;
+		__entry->ino = cgroup_ino(memcg->css.cgroup);
+	),
+
+	TP_printk("nr_reclaimed=%lu memcg=%lu",
+		__entry->nr_reclaimed,
+		__entry->ino)
 );
 
-DEFINE_EVENT(mm_vmscan_direct_reclaim_end_template, mm_vmscan_memcg_softlimit_reclaim_end,
+DEFINE_EVENT(mm_vmscan_memcg_reclaim_end_template, mm_vmscan_memcg_reclaim_end,
 
-	TP_PROTO(unsigned long nr_reclaimed),
+	TP_PROTO(unsigned long nr_reclaimed, const struct mem_cgroup *memcg),
 
-	TP_ARGS(nr_reclaimed)
+	TP_ARGS(nr_reclaimed, memcg)
 );
+
+DEFINE_EVENT(mm_vmscan_memcg_reclaim_end_template, mm_vmscan_memcg_softlimit_reclaim_end,
+
+	TP_PROTO(unsigned long nr_reclaimed, const struct mem_cgroup *memcg),
+
+	TP_ARGS(nr_reclaimed, memcg)
+);
+
 #endif /* CONFIG_MEMCG */
 
 TRACE_EVENT(mm_shrink_slab_start,
--- a/mm/vmscan.c~mm-memcg-print-out-cgroup-ino-in-the-memcg-tracepoints
+++ a/mm/vmscan.c
@@ -6415,8 +6415,8 @@ unsigned long mem_cgroup_shrink_node(str
 	sc.gfp_mask = (gfp_mask & GFP_RECLAIM_MASK) |
 			(GFP_HIGHUSER_MOVABLE & ~GFP_RECLAIM_MASK);
 
-	trace_mm_vmscan_memcg_softlimit_reclaim_begin(sc.order,
-						      sc.gfp_mask);
+	trace_mm_vmscan_memcg_softlimit_reclaim_begin(sc.order, sc.gfp_mask,
+						      memcg);
 
 	/*
 	 * NOTE: Although we can get the priority field, using it
@@ -6427,7 +6427,7 @@ unsigned long mem_cgroup_shrink_node(str
 	 */
 	shrink_lruvec(lruvec, &sc);
 
-	trace_mm_vmscan_memcg_softlimit_reclaim_end(sc.nr_reclaimed);
+	trace_mm_vmscan_memcg_softlimit_reclaim_end(sc.nr_reclaimed, memcg);
 
 	*nr_scanned = sc.nr_scanned;
 
@@ -6461,13 +6461,13 @@ unsigned long try_to_free_mem_cgroup_pag
 	struct zonelist *zonelist = node_zonelist(numa_node_id(), sc.gfp_mask);
 
 	set_task_reclaim_state(current, &sc.reclaim_state);
-	trace_mm_vmscan_memcg_reclaim_begin(0, sc.gfp_mask);
+	trace_mm_vmscan_memcg_reclaim_begin(0, sc.gfp_mask, memcg);
 	noreclaim_flag = memalloc_noreclaim_save();
 
 	nr_reclaimed = do_try_to_free_pages(zonelist, &sc);
 
 	memalloc_noreclaim_restore(noreclaim_flag);
-	trace_mm_vmscan_memcg_reclaim_end(nr_reclaimed);
+	trace_mm_vmscan_memcg_reclaim_end(nr_reclaimed, memcg);
 	set_task_reclaim_state(current, NULL);
 
 	return nr_reclaimed;
_

Patches currently in -mm which might be from ddrokosov@xxxxxxxxxxxxxxxxx are

mm-memcg-print-out-cgroup-ino-in-the-memcg-tracepoints.patch
mm-memcg-introduce-new-event-to-trace-shrink_memcg.patch
