Re: [PATCH v2 00/17] x86/resctrl: Support AMD Assignable Bandwidth Monitoring Counters (ABMC)

Hi James,

On Tue, Feb 20, 2024 at 7:21 AM James Morse <james.morse@xxxxxxx> wrote:
> On 16/02/2024 20:18, Peter Newman wrote:
> > On Thu, Feb 8, 2024 at 9:29 AM Moger, Babu <babu.moger@xxxxxxx> wrote:
> >> On 2/5/24 16:38, Reinette Chatre wrote:
> >>> You have made it clear on several occasions that you do not intend to support
> >>> domain level assignment. That may be ok but the interface you create should
> >>> not prevent future support of domain level assignment.
> >>>
> >>> If my point is not clear, could you please share how this interface is able to
> >>> support domain level assignment in the future?
> >>>
> >>> I am starting to think that we need a file similar to the schemata file
> >>> for group and domain level monitor configurations.
> >>
> >> Something like this?
> >>
> >> By default
> >> #cat /sys/fs/resctrl/monitor_state
> >> default:0=total=assign,local=assign;1=total=assign,local=assign
> >>
> >> With ABMC,
> >> #cat /sys/fs/resctrl/monitor_state
> >> ABMC:0=total=unassign,local=unassign;1=total=unassign,local=unassign
> >
> > The benefit from all the string parsing in this interface is only
> > halving the number of monitor_state sysfs writes we'd need compared to
> > creating a separate file for mbm_local and mbm_total. Given that our
> > use case is to assign the 32 assignable counters to read the bandwidth
> > of ~256 monitoring groups, this isn't a substantial gain to help us. I
> > think you should just focus on providing the necessary control
> > granularity without trying to consolidate writes in this interface. I
> > will propose an additional interface below to optimize our use case.
> >
> > Whether mbm_total and mbm_local are combined in the group directories
> > or not, I don't see why you wouldn't just repeat the same file
> > interface in the domain directories for a user needing finer-grained
> > controls.
>
> I don't follow why this has to be done globally. resctrl allows a CLOSID to have different
> configurations for different purposes between different domains (as long as tasks are
> pinned to CPUs). It feels a bit odd that these counters can't be considered per-domain too.

Assigning to all domains at once would allow us to better parallelize
the resulting IPIs when we do need to iterate a small set of monitors
over a large list of groups.
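
To make that concrete, here is a rough kernel-side sketch of the kind of
batching I have in mind. Every name in it (mon_domain, abmc_commit_*) is
made up for illustration and is not from this series; the point is only
that staging the assignments for all domains first lets a single
on_each_cpu_mask() call kick one CPU per domain concurrently, instead of
issuing a serial IPI per domain for every group touched:

  #include <linux/cpumask.h>
  #include <linux/gfp.h>
  #include <linux/list.h>
  #include <linux/smp.h>

  struct mon_domain {
          struct list_head        list;           /* entry in the resource's domain list */
          struct cpumask          cpu_mask;       /* CPUs backing this domain */
  };

  /* Runs on one CPU per domain; finds its own domain and writes the staged MSRs. */
  static void abmc_commit_domain(void *unused)
  {
          /* ... program the staged counter assignments for this CPU's domain ... */
  }

  static void abmc_commit_all_domains(struct list_head *domains)
  {
          struct mon_domain *d;
          cpumask_var_t ipi_cpus;

          if (!zalloc_cpumask_var(&ipi_cpus, GFP_KERNEL))
                  return;

          /* Pick one CPU from every domain... */
          list_for_each_entry(d, domains, list)
                  cpumask_set_cpu(cpumask_any(&d->cpu_mask), ipi_cpus);

          /* ...and kick them all at once, so the per-domain updates run in parallel. */
          on_each_cpu_mask(ipi_cpus, abmc_commit_domain, NULL, 1);

          free_cpumask_var(ipi_cpus);
  }

If the assignment can instead be changed one domain at a time, the writes
tend to serialize into one IPI round-trip per domain per group, which is
exactly the overhead we're trying to avoid at our group counts.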


> > I prototyped and tested the following additional interface for the
> > large-scale, batch use case that we're primarily concerned about:
> >
> > info/L3_MON/mbm_{local,total}_bytes_assigned
> >
> > Writing a whitespace-delimited list of mongroup directory paths does
>
> | mkdir /sys/fs/resctrl/my\ group
>
> string parsing in the kernel is rarely fun!

Hopefully restricting this to a newline-delimited list will keep it fun
and easy, then.

Otherwise, if referring to many groups in a single write isn't a viable
path forward, I'll still need to find a way to address the
fs/syscall/IPI overhead of measuring the bandwidth of a large number
of groups.
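
For completeness, the user side of what I prototyped boils down to
something like the below. The node name is the one I proposed above
(it doesn't exist upstream today), and the group paths are placeholders:

  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>

  int main(void)
  {
          /* resctrl-root-relative monitoring group paths, one per line */
          const char *groups =
                  "mon_groups/job0\n"
                  "mon_groups/job1\n"
                  "mon_groups/job2\n";
          int fd;

          fd = open("/sys/fs/resctrl/info/L3_MON/mbm_total_bytes_assigned",
                    O_WRONLY);
          if (fd < 0) {
                  perror("open");
                  return 1;
          }

          /*
           * A single write replaces the whole assignment: unassign
           * everything, assign a monitor to each listed group, and batch
           * the resulting register updates into one IPI per domain.
           */
          if (write(fd, groups, strlen(groups)) < 0)
                  perror("write");

          close(fd);
          return 0;
  }

That's one syscall and one IPI per domain no matter how many groups are
in the list, which is the property we care about.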

>
>
> > the following:
> > 1. unassigns all monitors for the given counter
> > 2. assigns a monitor to each mongroup referenced in the write
> > 3. batches per-domain register updates resulting from the assignments
> > into a single IPI for each domain
> >
> > This interface allows us to do fewer sysfs writes and IPIs on systems
> > with more assignable monitoring resources, rather than doing more.
> >
> > The reference to a mongroup when reading/writing the above node is the
> > resctrl-root-relative path to the monitoring group. There is probably
> > a more concise way to refer to the groups, but my prototype used
> > kernfs_walk_and_get() to locate each rdtgroup struct.
>
> If this file were re-used for finding where the monitors were currently allocated, using
> the name would be a natural fit for building a path to un-assign one group.
>
>
> > I would also like to add that in the software-ABMC prototype I made,
> > because it's based on assignment of a small number of RMIDs,
> > assignment results in all counters being assigned at once. On
> > implementations where per-counter assignments aren't possible,
> > assignment through such a resource would be allowed to assign more
> > resources than explicitly requested.
> >
> > This would allow an implementation only capable of global assignment
>
> Do we know if this exists? Given the configurations have to be different for a domain, I'd
> be surprised if counter configuration is somehow distributed between domains.

It's currently only a proposal[1] for mitigating the context-switch
overhead of soft RMIDs. I'm looking at the other alternative first,
though.


> > to assign resources to all groups when a non-empty string is written
> > to the proposed file nodes, and all resources to be unassigned when an
> > empty string is written. Reading back from the file nodes would tell
> > the user how much was actually assigned.
>
> What do you mean by 'how much'? Is this allowed to fail early? That feels a bit
> counter-intuitive. As this starts with a reset, if the number of counters is known, it
> should be easy for user-space to know it can only write X tokens into that file.

I was referring to the operation assigning more groups than requested
if the implementation is only capable of a master enable/disable for
all monitoring: reading back would indicate that all monitoring groups
are in the assigned list.

There would otherwise be an interface telling the user how many
monitors can be assigned, so there's no reason to expect this
operation to fail, short of the user doing something silly like
deleting a group while it's concurrently being assigned.
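
If it helps, the read-back I'm describing would look something like this
from user-space (again, the node name and semantics are only the
proposal, and the expected contents are just an illustration):

  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
          char buf[4096];
          ssize_t n;
          int fd;

          fd = open("/sys/fs/resctrl/info/L3_MON/mbm_total_bytes_assigned",
                    O_RDONLY);
          if (fd < 0) {
                  perror("open");
                  return 1;
          }

          /*
           * Expect one resctrl-root-relative group path per line. With
           * per-counter assignment this is exactly what was written; on a
           * global-only implementation it would list every monitoring
           * group, since assigning one group implicitly assigned them all.
           */
          while ((n = read(fd, buf, sizeof(buf) - 1)) > 0) {
                  buf[n] = '\0';
                  fputs(buf, stdout);
          }

          close(fd);
          return 0;
  }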

-Peter

[1] https://lore.kernel.org/lkml/CALPaoCiRD6j_Rp7ffew+PtGTF4rWDORwbuRQqH2i-cY5SvWQBg@xxxxxxxxxxxxxx/




