On 12/20/22 10:41, Peter Newman wrote:
> When creating a new monitoring group, the RMID allocated for it may have
> been used by a group which was previously removed. In this case, the
> hardware counters will have non-zero values which should be deducted
> from what is reported in the new group's counts.
>
> resctrl_arch_reset_rmid() initializes the prev_msr value for counters to
> 0, causing the initial count to be charged to the new group. Resurrect
> __rmid_read() and use it to initialize prev_msr correctly.
>
> Unlike before, __rmid_read() checks for error bits in the MSR read so
> that callers don't need to.
>
> Fixes: 1d81d15db39c ("x86/resctrl: Move mbm_overflow_count() into resctrl_arch_rmid_read()")
> Signed-off-by: Peter Newman <peternewman@xxxxxxxxxx>
> Reviewed-by: Reinette Chatre <reinette.chatre@xxxxxxxxx>
> Cc: stable@xxxxxxxxxxxxxxx
> ---
> v3:
>  - add changelog
>  - CC stable
>
> v2:
>  - move error bit processing into __rmid_read()
>
> v1: https://lore.kernel.org/lkml/20221207112924.3602960-1-peternewman@xxxxxxxxxx/
> v2: https://lore.kernel.org/lkml/20221214160856.2164207-1-peternewman@xxxxxxxxxx/
> ---
>  arch/x86/kernel/cpu/resctrl/monitor.c | 49 ++++++++++++++++++---------
>  1 file changed, 33 insertions(+), 16 deletions(-)
>
> diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
> index efe0c30d3a12..77538abeb72a 100644
> --- a/arch/x86/kernel/cpu/resctrl/monitor.c
> +++ b/arch/x86/kernel/cpu/resctrl/monitor.c
> @@ -146,6 +146,30 @@ static inline struct rmid_entry *__rmid_entry(u32 rmid)
>  	return entry;
>  }
>  
> +static int __rmid_read(u32 rmid, enum resctrl_event_id eventid, u64 *val)
> +{
> +	u64 msr_val;
> +
> +	/*
> +	 * As per the SDM, when IA32_QM_EVTSEL.EvtID (bits 7:0) is configured
> +	 * with a valid event code for supported resource type and the bits
> +	 * IA32_QM_EVTSEL.RMID (bits 41:32) are configured with valid RMID,
> +	 * IA32_QM_CTR.data (bits 61:0) reports the monitored data.
> +	 * IA32_QM_CTR.Error (bit 63) and IA32_QM_CTR.Unavailable (bit 62)
> +	 * are error bits.
> +	 */
> +	wrmsr(MSR_IA32_QM_EVTSEL, eventid, rmid);
> +	rdmsrl(MSR_IA32_QM_CTR, msr_val);
> +
> +	if (msr_val & RMID_VAL_ERROR)
> +		return -EIO;
> +	if (msr_val & RMID_VAL_UNAVAIL)
> +		return -EINVAL;
> +
> +	*val = msr_val;
> +	return 0;
> +}
> +
>  static struct arch_mbm_state *get_arch_mbm_state(struct rdt_hw_domain *hw_dom,
>  						 u32 rmid,
>  						 enum resctrl_event_id eventid)
> @@ -172,8 +196,12 @@ void resctrl_arch_reset_rmid(struct rdt_resource *r, struct rdt_domain *d,
>  	struct arch_mbm_state *am;
>  
>  	am = get_arch_mbm_state(hw_dom, rmid, eventid);
> -	if (am)
> +	if (am) {
>  		memset(am, 0, sizeof(*am));
> +
> +		/* Record any initial, non-zero count value. */
> +		__rmid_read(rmid, eventid, &am->prev_msr);
> +	}
>  }
>  
>  static u64 mbm_overflow_count(u64 prev_msr, u64 cur_msr, unsigned int width)
> @@ -191,25 +219,14 @@ int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain *d,
>  	struct rdt_hw_domain *hw_dom = resctrl_to_arch_dom(d);
>  	struct arch_mbm_state *am;
>  	u64 msr_val, chunks;
> +	int ret;
>  
>  	if (!cpumask_test_cpu(smp_processor_id(), &d->cpu_mask))
>  		return -EINVAL;
>  
> -	/*
> -	 * As per the SDM, when IA32_QM_EVTSEL.EvtID (bits 7:0) is configured
> -	 * with a valid event code for supported resource type and the bits
> -	 * IA32_QM_EVTSEL.RMID (bits 41:32) are configured with valid RMID,
> -	 * IA32_QM_CTR.data (bits 61:0) reports the monitored data.
> -	 * IA32_QM_CTR.Error (bit 63) and IA32_QM_CTR.Unavailable (bit 62)
> -	 * are error bits.
> -	 */
> -	wrmsr(MSR_IA32_QM_EVTSEL, eventid, rmid);
> -	rdmsrl(MSR_IA32_QM_CTR, msr_val);
> -
> -	if (msr_val & RMID_VAL_ERROR)
> -		return -EIO;
> -	if (msr_val & RMID_VAL_UNAVAIL)
> -		return -EINVAL;
> +	ret = __rmid_read(rmid, eventid, &msr_val);
> +	if (ret)
> +		return ret;
>  
>  	am = get_arch_mbm_state(hw_dom, rmid, eventid);
>  	if (am) {
>
> base-commit: 830b3c68c1fb1e9176028d02ef86f3cf76aa2476

Tested the patches on AMD systems. Looks good.

Tested-by: Babu Moger <babu.moger@xxxxxxx>

Thanks,
Babu Moger