On Wed, May 25, 2022 at 01:25:58PM -0700, Roman Gushchin wrote:
> Add a document describing the shrinker debugfs interface.
> 
> Signed-off-by: Roman Gushchin <roman.gushchin@xxxxxxxxx>
> ---
>  Documentation/admin-guide/mm/index.rst        |   1 +
>  .../admin-guide/mm/shrinker_debugfs.rst       | 100 ++++++++++++++++++
>  2 files changed, 101 insertions(+)
>  create mode 100644 Documentation/admin-guide/mm/shrinker_debugfs.rst
> 
> diff --git a/Documentation/admin-guide/mm/index.rst b/Documentation/admin-guide/mm/index.rst
> index c21b5823f126..1bd11118dfb1 100644
> --- a/Documentation/admin-guide/mm/index.rst
> +++ b/Documentation/admin-guide/mm/index.rst
> @@ -36,6 +36,7 @@ the Linux memory management.
>     numa_memory_policy
>     numaperf
>     pagemap
> +   shrinker_debugfs
>     soft-dirty
>     swap_numa
>     transhuge
> diff --git a/Documentation/admin-guide/mm/shrinker_debugfs.rst b/Documentation/admin-guide/mm/shrinker_debugfs.rst
> new file mode 100644
> index 000000000000..2033d696aa59
> --- /dev/null
> +++ b/Documentation/admin-guide/mm/shrinker_debugfs.rst
> @@ -0,0 +1,100 @@
> +.. _shrinker_debugfs:
> +
> +==========================
> +Shrinker Debugfs Interface
> +==========================
> +
> +Shrinker debugfs interface provides a visibility into the kernel memory
> +shrinkers subsystem and allows to get information about individual shrinkers.
> +
> +For each shrinker registered in the system a directory in **<debugfs>/shrinker/**
> +is created. The directory's name is composed from the shrinker's name and an
> +unique id: e.g. *kfree_rcu-0* or *sb-xfs:vda1-36*.
> +
> +Each shrinker directory contains the **count** file, which allows to trigger
> +the *count_objects()* callback for each memcg and numa node (if applicable).
> +
> +Usage:
> +------
> +
> +1. *List registered shrinkers*
> +
> +  ::
> +
> +    $ cd /sys/kernel/debug/shrinker/
> +    $ ls
> +    dqcache-16          sb-hugetlbfs-17  sb-rootfs-2      sb-tmpfs-49
> +    kfree_rcu-0         sb-hugetlbfs-33  sb-securityfs-6  sb-tracefs-13
> +    sb-aio-20           sb-iomem-12      sb-selinuxfs-22  sb-xfs:vda1-36
> +    sb-anon_inodefs-15  sb-mqueue-21     sb-sockfs-8      sb-zsmalloc-19
> +    sb-bdev-3           sb-nsfs-4        sb-sysfs-26      shadow-18
> +    sb-bpf-32           sb-pipefs-14     sb-tmpfs-1       thp_deferred_split-10
> +    sb-btrfs:vda2-24    sb-proc-25       sb-tmpfs-27      thp_zero-9
> +    sb-cgroup2-30       sb-proc-39       sb-tmpfs-29      xfs_buf-vda1-37
> +    sb-configfs-23      sb-proc-41       sb-tmpfs-35      xfs_inodegc-vda1-38
> +    sb-dax-11           sb-proc-45       sb-tmpfs-40      zspool-zram0-34
> +    sb-debugfs-7        sb-proc-46       sb-tmpfs-42
> +    sb-devpts-28        sb-proc-47       sb-tmpfs-43
> +    sb-devtmpfs-5       sb-pstore-31     sb-tmpfs-44
> +
> +2. *Get information about a specific shrinker*
> +
> +  ::
> +
> +    $ cd sb-btrfs\:vda2-24/
> +    $ ls
> +    count
> +
> +3. *Count objects*
> +
> +  Each line in the output has the following format::
> +
> +    <cgroup inode id> <nr of objects on node 0> <nr of objects on node 1> ...
> +    <cgroup inode id> <nr of objects on node 0> <nr of objects on node 1> ...
> +    ...
> +
> +  If there are no objects on all numa nodes, a line is omitted. If there
> +  are no objects at all, the output might be empty.

Should we add the following lines here?

"
If the shrinker is not memcg-aware or CONFIG_MEMCG is off, 0 is printed
as cgroup inode id. If the shrinker is not numa-aware, 0's are printed
for all nodes except the first one.
"

Thanks.
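For illustration, the per-line format described under *Count objects* is easy to
post-process, e.g. to sum the per-node counts for each cgroup. A minimal sketch
(the sample values and the awk post-processing are mine, not part of the patch;
the real file lives under /sys/kernel/debug/shrinker/<name>-<id>/count and
needs root):

```shell
# Stand-in for the contents of a shrinker's "count" file
# (hypothetical values, used here so the example is self-contained):
sample='1 224 2
21 98 0
55 818 10'

# Each line is: <cgroup inode id> <objects on node 0> <objects on node 1> ...
# Sum the per-node counts for every cgroup line.
result=$(printf '%s\n' "$sample" | awk '{
    total = 0
    for (i = 2; i <= NF; i++)
        total += $i
    print "cgroup inode " $1 ": " total " objects"
}')
printf '%s\n' "$result"
```

On the sample above this prints one summary line per cgroup, e.g.
"cgroup inode 1: 226 objects" for the first line.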