On 7/11/24 7:07 PM, Waiman Long wrote:
> Cgroup subsystem state (CSS) is an abstraction in the cgroup layer to
> help manage different structures in various cgroup subsystems by being
> an embedded element inside a larger structure like cpuset or mem_cgroup.
>
> The /proc/cgroups file shows the number of cgroups for each of the
> subsystems. With cgroup v1, the number of CSSes is the same as the
> number of cgroups. That is not the case anymore with cgroup v2. The
> /proc/cgroups file cannot show the actual number of CSSes for the
> subsystems that are bound to cgroup v2.
>
> So if a v2 cgroup subsystem is leaking cgroups (usually the memory
> cgroup), we can't tell by looking at /proc/cgroups which cgroup
> subsystems may be responsible.
>
> As cgroup v2 has deprecated the use of /proc/cgroups, the hierarchical
> cgroup.stat file is now being extended to show the number of live and
> dying CSSes associated with each of the non-inhibited cgroup subsystems
> that have been bound to cgroup v2, as long as the count is not zero.
> The number includes CSSes in the current cgroup as well as in all the
> descendants underneath it. This will help us pinpoint which subsystems
> are responsible for an increasing number of dying
> (nr_dying_descendants) cgroups.
>
> The cgroup-v2.rst file is updated to discuss this new behavior.
>
> With this patch applied, a sample output from the root cgroup.stat
> file is shown below.
>
>   nr_descendants 55
>   nr_dying_descendants 35
>   nr_subsys_cpuset 1
>   nr_subsys_cpu 40
>   nr_subsys_io 40
>   nr_subsys_memory 55
>   nr_dying_subsys_memory 35
>   nr_subsys_perf_event 56
>   nr_subsys_hugetlb 1
>   nr_subsys_pids 55
>   nr_subsys_rdma 1
>   nr_subsys_misc 1
>
> Another sample output, from system.slice/cgroup.stat, is:
>
>   nr_descendants 32
>   nr_dying_descendants 33
>   nr_subsys_cpu 30
>   nr_subsys_io 30
>   nr_subsys_memory 32
>   nr_dying_subsys_memory 33
>   nr_subsys_perf_event 33
>   nr_subsys_pids 32
>
> Signed-off-by: Waiman Long <longman@xxxxxxxxxx>
> Acked-by: Roman Gushchin <roman.gushchin@xxxxxxxxx>

Reviewed-by: Kamalesh Babulal <kamalesh.babulal@xxxxxxxxxx>

--
Thanks,
Kamalesh
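
[Editorial note: a minimal userspace sketch of how the new per-subsystem
keys could be consumed to spot a leaking controller. The hard-coded
root cgroup.stat path and this program are illustrative assumptions,
not part of the patch.]

/*
 * Illustrative only: print the per-subsystem dying CSS counts from the
 * root cgroup.stat file. A consistently growing nr_dying_subsys_<ctrl>
 * value points at the controller holding on to dying cgroups.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
	/* Assumed mount point of the cgroup v2 hierarchy. */
	FILE *f = fopen("/sys/fs/cgroup/cgroup.stat", "r");
	char key[64];
	unsigned long val;

	if (!f) {
		perror("fopen");
		return 1;
	}

	/* Each line is "<key> <value>"; keep only the dying-CSS keys. */
	while (fscanf(f, "%63s %lu", key, &val) == 2) {
		if (!strncmp(key, "nr_dying_subsys_", 16))
			printf("%s: %lu dying CSSes\n", key + 16, val);
	}

	fclose(f);
	return 0;
}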