Hi Reinette,

On Tue, May 16, 2023 at 5:06 PM Reinette Chatre <reinette.chatre@xxxxxxxxx> wrote:
> On 5/15/2023 7:42 AM, Peter Newman wrote:
> >
> > I used a simple parent-child pipe loop benchmark with the parent in
> > one monitoring group and the child in another to trigger 2M
> > context-switches on the same CPU and compared the sample-based
> > profiles on an AMD and Intel implementation. I used perf diff to
> > compare the samples between hard and soft RMID switches.
> >
> > Intel(R) Xeon(R) Platinum 8173M CPU @ 2.00GHz:
> >
> >            +44.80%  [kernel.kallsyms]  [k] __rmid_read
> >   10.43%    -9.52%  [kernel.kallsyms]  [k] __switch_to
> >
> > AMD EPYC 7B12 64-Core Processor:
> >
> >            +28.27%  [kernel.kallsyms]  [k] __rmid_read
> >   13.45%   -13.44%  [kernel.kallsyms]  [k] __switch_to
> >
> > Note that a soft RMID switch that doesn't change CLOSID skips the
> > PQR_ASSOC write completely, so from this data I can roughly say that
> > __rmid_read() is a little over 2x the length of a PQR_ASSOC write that
> > changes the current RMID on the AMD implementation and about 4.5x
> > longer on Intel.
> >
> > Let me know if this clarifies the cost enough or if you'd like to also
> > see instrumented measurements on the individual WRMSR/RDMSR
> > instructions.
>
> I can see from the data the portion of total time spent in __rmid_read().
> It is not clear to me what the impact on a context switch is. Is it
> possible to say with this data that: this solution makes a context switch
> x% slower?
>
> I think it may be optimistic to view this as a replacement of a PQR write.
> As you point out, that requires that a CPU switches between tasks with the
> same CLOSID. You demonstrate that resctrl already contributes a significant
> delay to __switch_to - this work will increase that much more, it has to
> be clear about this impact and motivate that it is acceptable.

We were operating under the assumption that if the overhead wasn't
acceptable, we would have heard complaints about it by now, but we
ultimately learned that this feature wasn't deployed on AMD hardware as
widely as we had originally thought, and that the overhead does need to
be addressed.

I am interested in your opinion on two options I'm exploring to mitigate
the overhead, both of which depend on an API like the one Babu recently
proposed for the AMD ABMC feature [1], where a new file interface allows
the user to indicate which mon_groups are actively being measured. I
will refer to this as "assigned" for now, as that's the current
proposal.

The first is likely the simpler approach: only read MBM event counters
which have been marked as "assigned" in the filesystem, to avoid paying
the context switch cost for tasks in groups which are not actively being
measured. In our use case, we calculate memory bandwidth on every group
every few minutes by reading the counters twice, 5 seconds apart. We
would only need counters to be read during this 5-second window.

The second involves avoiding the situation where a hardware counter
could be deallocated: determine the number of simultaneous RMIDs the
hardware supports and reduce the effective number of RMIDs available to
that number. Use the default RMID (0) for all "unassigned" monitoring
groups and report "Unavailable" on all of their counter reads (and
address the default monitoring group's counts being unreliable). When a
group is assigned, attempt to allocate one of the remaining, usable
RMIDs to it. It would only be possible to assign all of a group's event
counters (local, total, occupancy) at the same time.
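To make the second option more concrete, here is a rough, self-contained
sketch of the allocation policy I have in mind. All of the names, the
structure, and the NUM_HW_COUNTERS value are made up for illustration;
none of this is existing resctrl code:

/*
 * Rough illustration of the second option: a fixed pool of
 * hardware-backed RMIDs, with every "unassigned" group parked on the
 * default RMID and reporting "Unavailable".
 */
#include <stdbool.h>
#include <stdio.h>

#define DEFAULT_RMID	0	/* shared by all "unassigned" groups */
#define NUM_HW_COUNTERS	32	/* assumed simultaneous-RMID limit */

/* Which hardware-backed RMIDs (1..NUM_HW_COUNTERS) are in use. */
static bool rmid_busy[NUM_HW_COUNTERS + 1];

struct mon_group {
	unsigned int rmid;	/* DEFAULT_RMID until assigned */
	bool assigned;
};

/* User marks the group "assigned": try to give it its own RMID. */
static int mon_group_assign(struct mon_group *g)
{
	for (unsigned int rmid = 1; rmid <= NUM_HW_COUNTERS; rmid++) {
		if (!rmid_busy[rmid]) {
			rmid_busy[rmid] = true;
			g->rmid = rmid;
			g->assigned = true;
			/* local, total and occupancy all follow this RMID */
			return 0;
		}
	}
	return -1;	/* no usable RMID left: assignment fails */
}

/* Counter reads for unassigned groups report "Unavailable". */
static void mon_group_read(const struct mon_group *g)
{
	if (!g->assigned)
		printf("Unavailable\n");
	else
		printf("read hardware counter for RMID %u\n", g->rmid);
}

int main(void)
{
	struct mon_group g = { .rmid = DEFAULT_RMID };

	mon_group_read(&g);		/* "Unavailable" */
	if (mon_group_assign(&g) == 0)
		mon_group_read(&g);	/* now backed by a real counter */
	return 0;
}

The point of the sketch is only that assignment fails outright once the
hardware counters are exhausted, rather than a counter being silently
deallocated from a group that is still being measured.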
Using this approach, we would no longer be able to measure all groups at
the same time, but this is something we would already be accepting when
using the AMD ABMC feature.

While the second option is a lot more disruptive at the filesystem
layer, it does eliminate the added context switch overhead. Also, it may
be helpful in the long run for the filesystem code to start taking a
more abstract view of hardware monitoring resources, given that few
implementations can afford to assign hardware to all monitoring IDs all
the time.

In both cases, the meaning of "assigned" could vary greatly, even among
AMD implementations.

Thanks!
-Peter

[1] https://lore.kernel.org/lkml/20231201005720.235639-1-babu.moger@xxxxxxx/