When a user creates a control or monitor group, the CLOSID or RMID
assigned to the group is not visible to the user. These are
architecturally defined entities, and there is no harm in displaying
them in the resctrl groups; exposing them can sometimes help when
debugging issues.

Add CLOSID and RMID to the control/monitor group display in the
resctrl interface:

  $ cat /sys/fs/resctrl/clos1/closid
  1
  $ cat /sys/fs/resctrl/mon_groups/mon1/rmid
  3

Signed-off-by: Babu Moger <babu.moger@xxxxxxx>
---
 Documentation/x86/resctrl.rst          | 17 ++++++++++++
 arch/x86/kernel/cpu/resctrl/rdtgroup.c | 44 ++++++++++++++++++++++++++++++++
 2 files changed, 61 insertions(+)

diff --git a/Documentation/x86/resctrl.rst b/Documentation/x86/resctrl.rst
index 25203f20002d..67eae74fe40c 100644
--- a/Documentation/x86/resctrl.rst
+++ b/Documentation/x86/resctrl.rst
@@ -321,6 +321,15 @@ All groups contain the following files:
 
 	Just like "cpus", only using ranges of CPUs instead of bitmasks.
 
+"rmid":
+	Available only with the debug option. Reading this file shows
+	the Resource Monitoring ID (RMID) used for monitoring resource
+	utilization. Monitoring is performed by tagging each core (or
+	thread) or process via an RMID. The kernel assigns a new RMID
+	when a group is created, depending on the available RMIDs.
+	Multiple cores (or threads) or processes can share the same
+	RMID within a resctrl domain.
+
 When control is enabled all CTRL_MON groups will also contain:
 
 "schemata":
@@ -342,6 +351,14 @@ When control is enabled all CTRL_MON groups will also contain:
 	file. On successful pseudo-locked region creation the mode will
 	automatically change to "pseudo-locked".
 
+"closid":
+	Available only with the debug option. Reading this file shows
+	the Class of Service (CLOS) ID, which acts as a resource control
+	tag on which resources can be throttled. The kernel assigns a
+	new CLOSID when a control group is created, depending on the
+	available CLOSIDs. Multiple cores (or threads) or processes can
+	share the same CLOSID within a resctrl domain.
+
 When monitoring is enabled all MON groups will also contain:
 
 "mon_data":
diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
index 1eb538965bd3..389d64b42704 100644
--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
@@ -760,6 +760,38 @@ static int rdtgroup_tasks_show(struct kernfs_open_file *of,
 	return ret;
 }
 
+static int rdtgroup_closid_show(struct kernfs_open_file *of,
+				struct seq_file *s, void *v)
+{
+	struct rdtgroup *rdtgrp;
+	int ret = 0;
+
+	rdtgrp = rdtgroup_kn_lock_live(of->kn);
+	if (rdtgrp)
+		seq_printf(s, "%u\n", rdtgrp->closid);
+	else
+		ret = -ENOENT;
+	rdtgroup_kn_unlock(of->kn);
+
+	return ret;
+}
+
+static int rdtgroup_rmid_show(struct kernfs_open_file *of,
+			      struct seq_file *s, void *v)
+{
+	struct rdtgroup *rdtgrp;
+	int ret = 0;
+
+	rdtgrp = rdtgroup_kn_lock_live(of->kn);
+	if (rdtgrp)
+		seq_printf(s, "%u\n", rdtgrp->mon.rmid);
+	else
+		ret = -ENOENT;
+	rdtgroup_kn_unlock(of->kn);
+
+	return ret;
+}
+
 #ifdef CONFIG_PROC_CPU_RESCTRL
 
 /*
@@ -1821,6 +1853,12 @@ static struct rftype res_common_files[] = {
 		.seq_show	= rdtgroup_tasks_show,
 		.fflags		= RFTYPE_BASE,
 	},
+	{
+		.name		= "rmid",
+		.mode		= 0444,
+		.kf_ops		= &rdtgroup_kf_single_ops,
+		.seq_show	= rdtgroup_rmid_show,
+	},
 	{
 		.name		= "schemata",
 		.mode		= 0644,
@@ -1844,6 +1882,12 @@ static struct rftype res_common_files[] = {
 		.seq_show	= rdtgroup_size_show,
 		.fflags		= RFTYPE_BASE_CTRL,
 	},
+	{
+		.name		= "closid",
+		.mode		= 0444,
+		.kf_ops		= &rdtgroup_kf_single_ops,
+		.seq_show	= rdtgroup_closid_show,
+	},
 };
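
For reference, here is a minimal usage sketch (not part of the patch)
showing how the new files would be read. It assumes the resctrl
filesystem is mounted with the "debug" option referred to in the
documentation hunks above, and reuses the example group names "clos1"
and "mon1" from the commit message; the exact mount syntax is an
assumption based on how other resctrl mount options are passed.

  # Mount resctrl with the debug option (assumed syntax) so that the
  # "closid" and "rmid" files are created in each group.
  $ mount -t resctrl resctrl -o debug /sys/fs/resctrl

  # Create a control group and a monitor group, then read the IDs the
  # kernel assigned to them.
  $ mkdir /sys/fs/resctrl/clos1
  $ cat /sys/fs/resctrl/clos1/closid
  1
  $ mkdir /sys/fs/resctrl/mon_groups/mon1
  $ cat /sys/fs/resctrl/mon_groups/mon1/rmid
  3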