Hi James,

On 8/16/24 11:30, James Morse wrote:
> Hi Babu,
>
> On 06/08/2024 23:00, Babu Moger wrote:
>> The ABMC feature provides an option to the user to assign a hardware
>> counter to an RMID and monitor the bandwidth as long as it is assigned.
>> The assigned RMID will be tracked by the hardware until the user unassigns
>> it manually.
>>
>> Counters are configured by writing to L3_QOS_ABMC_CFG MSR and
>> specifying the counter id, bandwidth source, and bandwidth types.
>>
>> Provide the interface to assign the counter ids to RMID.
>>
>> The feature details are documented in the APM listed below [1].
>> [1] AMD64 Architecture Programmer's Manual Volume 2: System Programming
>> Publication # 24593 Revision 3.41 section 19.3.3.3 Assignable Bandwidth
>> Monitoring (ABMC).
>
>
>> diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
>> index 60696b248b56..1ee91a7293a8 100644
>> --- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
>> +++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
>> @@ -1864,6 +1864,103 @@ static ssize_t mbm_local_bytes_config_write(struct kernfs_open_file *of,
>
>> +/*
>> + * Send an IPI to the domain to assign the counter id to RMID.
>> + */
>> +int resctrl_arch_assign_cntr(struct rdt_mon_domain *d, enum resctrl_event_id evtid,
>> +                             u32 rmid, u32 cntr_id, u32 closid, bool assign)
>
> MPAM ends up with a per-resource array of monitor-ids that it uses to map cntr_id
> allocated by resctrl to the underlying hardware id. Could this function pass the struct
> rdt_resource too?
> (this saves me having to assume its the L3 - adding to the technical debt in this area)

Yes, we can pass struct rdt_resource. That makes it 7 parameters for this function. Hope
that is fine. (A rough sketch of the revised prototype is at the bottom of this mail.)

>
> Nit: could closid and rmid appear next to each other, and in that order ... just to fit
> with other helpers that take both.

Sure.

>
>
>> +{
>> +        struct rdt_hw_mon_domain *hw_dom = resctrl_to_arch_mon_dom(d);
>> +        union l3_qos_abmc_cfg abmc_cfg = { 0 };
>> +        struct arch_mbm_state *arch_mbm;
>> +
>> +        abmc_cfg.split.cfg_en = 1;
>> +        abmc_cfg.split.cntr_en = assign ? 1 : 0;
>> +        abmc_cfg.split.cntr_id = cntr_id;
>> +        abmc_cfg.split.bw_src = rmid;
>> +
>> +        /* Update the event configuration from the domain */
>> +        if (evtid == QOS_L3_MBM_TOTAL_EVENT_ID) {
>> +                abmc_cfg.split.bw_type = hw_dom->mbm_total_cfg;
>> +                arch_mbm = &hw_dom->arch_mbm_total[rmid];
>> +        } else {
>> +                abmc_cfg.split.bw_type = hw_dom->mbm_local_cfg;
>> +                arch_mbm = &hw_dom->arch_mbm_local[rmid];
>> +        }
>> +
>> +        smp_call_function_any(&d->hdr.cpu_mask, rdtgroup_abmc_cfg, &abmc_cfg, 1);
>> +
>> +        /*
>> +         * Reset the architectural state so that reading of hardware
>> +         * counter is not considered as an overflow in next update.
>> +         */
>> +        if (arch_mbm)
>> +                memset(arch_mbm, 0, sizeof(struct arch_mbm_state));
>> +
>> +        return 0;
>> +}
>
>
> Thanks,
>
> James
>

--
Thanks
Babu Moger
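For reference, here is a rough, untested sketch of what I have in mind for the revised
prototype, with struct rdt_resource passed in and closid/rmid kept next to each other as
you suggested (the exact parameter order is just my assumption at this point):

/* Untested sketch only; parameter order is my assumption, not final. */
int resctrl_arch_assign_cntr(struct rdt_resource *r, struct rdt_mon_domain *d,
                             enum resctrl_event_id evtid, u32 closid, u32 rmid,
                             u32 cntr_id, bool assign);

Passing the resource should also let an architecture map cntr_id through a per-resource
table rather than assuming L3, as you described for MPAM.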