Hi Reinette,
On 10/15/2024 10:25 PM, Reinette Chatre wrote:
Hi Babu,
On 10/9/24 10:39 AM, Babu Moger wrote:
The mbm_cntr_assign mode offers several hardware counters that can be
assigned to an RMID-event pair and monitor the bandwidth as long as it
repeated nit (to be consistent): RMID, event
Sure.
is assigned.
Counters are managed at two levels. The global assignment is tracked
using the mbm_cntr_free_map field in the struct resctrl_mon, while
domain-specific assignments are tracked using the mbm_cntr_map field
in the struct rdt_mon_domain. Allocation begins at the global level
and is then applied individually to each domain.
Introduce an interface to allocate these counters and update the
corresponding domains accordingly.
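For illustration, the two levels of tracking amount to roughly the following
(a sketch only: MBM_MAX_CNTRS is a made-up bound, and the real structs may
size and place these bitmaps differently):

#include <linux/bitmap.h>

#define MBM_MAX_CNTRS	64	/* hypothetical bound, for the sketch only */

struct resctrl_mon {
	/* Global level: a set bit means the counter is still unassigned */
	DECLARE_BITMAP(mbm_cntr_free_map, MBM_MAX_CNTRS);
};

struct rdt_mon_domain {
	/* Domain level: a set bit means the counter is programmed in this domain */
	DECLARE_BITMAP(mbm_cntr_map, MBM_MAX_CNTRS);
};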
Signed-off-by: Babu Moger <babu.moger@xxxxxxx>
---
v8: Renamed rdtgroup_assign_cntr() to rdtgroup_assign_cntr_event().
Added code to return an error if rdtgroup_assign_cntr_event() fails.
Moved definition of MBM_EVENT_ARRAY_INDEX to resctrl/internal.h.
Fixed a typo in the comments.
v7: New patch. Moved all the FS code here.
Merged rdtgroup_assign_cntr and rdtgroup_alloc_cntr.
Added new #define MBM_EVENT_ARRAY_INDEX.
---
arch/x86/kernel/cpu/resctrl/internal.h | 9 +++++
arch/x86/kernel/cpu/resctrl/rdtgroup.c | 47 ++++++++++++++++++++++++++
2 files changed, 56 insertions(+)
diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/resctrl/internal.h
index 6d4df0490186..900e18aea2c4 100644
--- a/arch/x86/kernel/cpu/resctrl/internal.h
+++ b/arch/x86/kernel/cpu/resctrl/internal.h
@@ -67,6 +67,13 @@
#define MON_CNTR_UNSET U32_MAX
+/*
+ * Get the counter index for the assignable counter
+ * 0 for evtid == QOS_L3_MBM_TOTAL_EVENT_ID
+ * 1 for evtid == QOS_L3_MBM_LOCAL_EVENT_ID
+ */
+#define MBM_EVENT_ARRAY_INDEX(_event) ((_event) - 2)
+
This can be moved to patch that introduces and initializes the array and used there.
Sure.
/**
* cpumask_any_housekeeping() - Choose any CPU in @mask, preferring those that
* aren't marked nohz_full
@@ -708,6 +715,8 @@ unsigned int mon_event_config_index_get(u32 evtid);
int resctrl_arch_config_cntr(struct rdt_resource *r, struct rdt_mon_domain *d,
enum resctrl_event_id evtid, u32 rmid, u32 closid,
u32 cntr_id, bool assign);
+int rdtgroup_assign_cntr_event(struct rdt_resource *r, struct rdtgroup *rdtgrp,
+ struct rdt_mon_domain *d, enum resctrl_event_id evtid);
void rdt_staged_configs_clear(void);
bool closid_allocated(unsigned int closid);
int resctrl_find_cleanest_closid(void);
diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
index 4ab1a18010c9..e4f628e6fe65 100644
--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
@@ -1898,6 +1898,53 @@ int resctrl_arch_config_cntr(struct rdt_resource *r, struct rdt_mon_domain *d,
return 0;
}
+/*
+ * Assign a hardware counter to the group.
hmmm ... counters are not assigned to groups. How about:
"Assign a hardware counter to event @evtid of group @rdtgrp"?
Sure.
+ * Counter will be assigned to all the domains if rdt_mon_domain is NULL
+ * else the counter will be allocated to specific domain.
"will be allocated to" -> "will be assigned to"?
Sure.
+ */
+int rdtgroup_assign_cntr_event(struct rdt_resource *r, struct rdtgroup *rdtgrp,
+ struct rdt_mon_domain *d, enum resctrl_event_id evtid)
+{
+ int index = MBM_EVENT_ARRAY_INDEX(evtid);
+ int cntr_id = rdtgrp->mon.cntr_id[index];
+ int ret;
+
+ /*
+ * Allocate a new counter id to the event if the counter is not
+ * assigned already.
+ */
+ if (cntr_id == MON_CNTR_UNSET) {
+ cntr_id = mbm_cntr_alloc(r);
+ if (cntr_id < 0) {
+ rdt_last_cmd_puts("Out of MBM assignable counters\n");
+ return -ENOSPC;
+ }
+ rdtgrp->mon.cntr_id[index] = cntr_id;
+ }
+
+ if (!d) {
+ list_for_each_entry(d, &r->mon_domains, hdr.list) {
+ ret = resctrl_arch_config_cntr(r, d, evtid, rdtgrp->mon.rmid,
+ rdtgrp->closid, cntr_id, true);
+ if (ret)
+ goto out_done_assign;
+
+ set_bit(cntr_id, d->mbm_cntr_map);
The code pattern above is repeated four times in this work, twice in
rdtgroup_assign_cntr_event() and twice in rdtgroup_unassign_cntr_event(). This
duplication should be avoided. It can be done in a function that also resets
the architectural state.
Are you suggesting combining rdtgroup_assign_cntr_event() and
rdtgroup_unassign_cntr_event()?
It can be done. We need a flag to tell if it is an assign or an unassign.
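Something along these lines, perhaps (untested sketch; the helper name
rdtgroup_config_cntr_domain() is made up here):

static int rdtgroup_config_cntr_domain(struct rdt_resource *r,
				       struct rdt_mon_domain *d,
				       struct rdtgroup *rdtgrp,
				       enum resctrl_event_id evtid,
				       int cntr_id, bool assign)
{
	int ret;

	ret = resctrl_arch_config_cntr(r, d, evtid, rdtgrp->mon.rmid,
				       rdtgrp->closid, cntr_id, assign);
	if (ret)
		return ret;

	/* Keep the domain's tracking bitmap in sync with the hardware state */
	if (assign)
		set_bit(cntr_id, d->mbm_cntr_map);
	else
		clear_bit(cntr_id, d->mbm_cntr_map);

	return 0;
}

Both the single-domain and the all-domain paths of assign and unassign could
then call this instead of open coding the pattern.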
+ }
+ } else {
+ ret = resctrl_arch_config_cntr(r, d, evtid, rdtgrp->mon.rmid,
+ rdtgrp->closid, cntr_id, true);
+ if (ret)
+ goto out_done_assign;
+
+ set_bit(cntr_id, d->mbm_cntr_map);
+ }
+
+out_done_assign:
Should a newly allocated counter not be freed if it could not be configured?
Yes.
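So the exit path would become something like this (untested; assumes
mbm_cntr_free() mirrors mbm_cntr_alloc() and takes the resource and counter
id, and that a local "newly_allocated" flag records whether the counter was
allocated in this call):

out_done_assign:
	if (ret && newly_allocated) {
		/* Undo the allocation done earlier in this function */
		mbm_cntr_free(r, cntr_id);
		rdtgrp->mon.cntr_id[index] = MON_CNTR_UNSET;
	}
	return ret;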
+ return ret;
+}
+
/* rdtgroup information files for one cache resource. */
static struct rftype res_common_files[] = {
{
Reinette
--
- Babu Moger