Re: [PATCH v3 1/2] cgroup: Show # of subsystem CSSes in cgroup.stat

On 7/10/24 17:43, Roman Gushchin wrote:
On Wed, Jul 10, 2024 at 02:23:52PM -0400, Waiman Long wrote:
Cgroup subsystem state (CSS) is an abstraction in the cgroup layer to
help manage different structures in various cgroup subsystems by being
an embedded element inside a larger structure like cpuset or mem_cgroup.

The /proc/cgroups file shows the number of cgroups for each of the
subsystems.  With cgroup v1, the number of CSSes is the same as the
number of cgroups.  That is not the case anymore with cgroup v2. The
/proc/cgroups file cannot show the actual number of CSSes for the
subsystems that are bound to cgroup v2.

So if a v2 cgroup subsystem is leaking cgroups (usually memory cgroup),
we can't tell by looking at /proc/cgroups which cgroup subsystems may
be responsible.

As cgroup v2 has deprecated the use of /proc/cgroups, the hierarchical
cgroup.stat file is now extended to show the number of live and dying
CSSes associated with each of the non-inhibited cgroup subsystems bound
to cgroup v2, as long as the count is non-zero.  The number includes
CSSes in the current cgroup as well as in all the descendants
underneath it.  This will help us pinpoint which subsystems are
responsible for an increasing number of dying (nr_dying_descendants)
cgroups.
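
For illustration, the per-subsystem entries can be produced by walking
the cgroup's subsystem CSSes and printing the hierarchical counters.  A
minimal sketch of the show side (not the exact patch code; it assumes
the existing for_each_css() iterator and css->ss->name):

	for_each_css(css, ssid, cgroup) {
		/* +1 accounts for the current cgroup's own CSS */
		seq_printf(seq, "nr_%s %d\n", css->ss->name,
			   css->nr_descendants + 1);
		if (css->nr_dying_descendants)
			seq_printf(seq, "nr_dying_%s %d\n", css->ss->name,
				   css->nr_dying_descendants);
	}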

The cgroup-v2.rst file is updated to discuss this new behavior.

With this patch applied, a sample output from the root cgroup.stat file
is shown below.

	nr_descendants 54
	nr_dying_descendants 44
	nr_cpuset 1
	nr_cpu 40
	nr_io 40
	nr_memory 54
	nr_dying_memory 44
	nr_perf_event 55
	nr_hugetlb 1
	nr_pids 54
	nr_rdma 1
	nr_misc 1

Another sample output, from system.slice/cgroup.stat:

	nr_descendants 32
	nr_dying_descendants 37
	nr_cpu 30
	nr_io 30
	nr_memory 32
	nr_dying_memory 37
	nr_perf_event 33
	nr_pids 32

Signed-off-by: Waiman Long <longman@xxxxxxxxxx>
I like it way more than the previous version, thank you for the update.

---
  Documentation/admin-guide/cgroup-v2.rst | 14 ++++++-
  include/linux/cgroup-defs.h             |  7 ++++
  kernel/cgroup/cgroup.c                  | 50 ++++++++++++++++++++++++-
  3 files changed, 68 insertions(+), 3 deletions(-)

diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
index 52763d6b2919..9031419271cd 100644
--- a/Documentation/admin-guide/cgroup-v2.rst
+++ b/Documentation/admin-guide/cgroup-v2.rst
@@ -981,6 +981,16 @@ All cgroup core files are prefixed with "cgroup."
  		A dying cgroup can consume system resources not exceeding
  		limits, which were active at the moment of cgroup deletion.
+	  nr_<cgroup_subsys>
+		Total number of live cgroups associated with that cgroup
+		subsystem (e.g. memory) at and beneath the current
+		cgroup.  An entry will only be shown if it is not zero.
+
+	  nr_dying_<cgroup_subsys>
+		Total number of dying cgroups associated with that cgroup
+		subsystem (e.g. memory) beneath the current cgroup.
+		An entry will only be shown if it is not zero.
+
    cgroup.freeze
  	A read-write single value file which exists on non-root cgroups.
  	Allowed values are "0" and "1". The default is "0".
@@ -2930,8 +2940,8 @@ Deprecated v1 Core Features
- "cgroup.clone_children" is removed. -- /proc/cgroups is meaningless for v2. Use "cgroup.controllers" file
-  at the root instead.
+- /proc/cgroups is meaningless for v2.  Use "cgroup.controllers" or
+  "cgroup.stat" files at the root instead.
Issues with v1 and Rationales for v2
diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h
index b36690ca0d3f..62de18874508 100644
--- a/include/linux/cgroup-defs.h
+++ b/include/linux/cgroup-defs.h
@@ -210,6 +210,13 @@ struct cgroup_subsys_state {
  	 * fields of the containing structure.
  	 */
  	struct cgroup_subsys_state *parent;
+
+	/*
+	 * Keep track of total numbers of visible and dying descendant CSSes.
+	 * Protected by cgroup_mutex.
+	 */
+	int nr_descendants;
+	int nr_dying_descendants;
  };
/*
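
The counters are maintained on CSS lifetime transitions.  A minimal
sketch of the idea (css_adjust_descendant_counts() is a hypothetical
helper for illustration, not the patch code): when a CSS comes online,
each ancestor CSS's nr_descendants is bumped; when it is killed, the
count migrates to nr_dying_descendants until the CSS is finally
released, at which point nr_dying_descendants is decremented.

	/* Hypothetical helper for illustration -- not the patch code. */
	static void css_adjust_descendant_counts(struct cgroup_subsys_state *css,
						 bool online)
	{
		struct cgroup_subsys_state *parent;

		lockdep_assert_held(&cgroup_mutex);

		for (parent = css->parent; parent; parent = parent->parent) {
			if (online) {
				parent->nr_descendants++;
			} else {
				/* killed: count as dying until final release */
				parent->nr_descendants--;
				parent->nr_dying_descendants++;
			}
		}
	}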
diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index c8e4b62b436a..18c982a06446 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -3669,12 +3669,34 @@ static int cgroup_events_show(struct seq_file *seq, void *v)
  static int cgroup_stat_show(struct seq_file *seq, void *v)
  {
  	struct cgroup *cgroup = seq_css(seq)->cgroup;
+	struct cgroup_subsys_state *css;
+	int ssid;
+	/* cgroup_mutex required for for_each_css() */
+	cgroup_lock();
I *guess* it can be done under rcu_read_lock(), can't it?
That would also eliminate the need for the second patch, which is
questionable (e.g. can one unprivileged user block others?)

I am just following the instructions in the comment above the for_each_css() macro:

 *
 * Should be called under cgroup_mutex.
 */

I think taking rcu_read_lock() should also work in this case. Will try it out and update the patch after some testing.
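
For reference, the rcu_read_lock() variant under discussion would look
roughly like this (a sketch only; for_each_css() resolves the CSS
pointers with rcu_dereference_check(), so an RCU read-side critical
section should satisfy it, at the cost of possibly reading transiently
inconsistent counters):

	rcu_read_lock();
	for_each_css(css, ssid, cgroup) {
		/* counters may be mid-update without cgroup_mutex */
		seq_printf(seq, "nr_%s %d\n", css->ss->name,
			   css->nr_descendants + 1);
		if (css->nr_dying_descendants)
			seq_printf(seq, "nr_dying_%s %d\n", css->ss->name,
				   css->nr_dying_descendants);
	}
	rcu_read_unlock();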

Thanks,
Longman




