Re: [RFC PATCH v4 0/3] memcg weighted interleave mempolicy control


On Tue, Dec 05, 2023 at 05:01:51PM +0800, Huang, Ying wrote:
> Gregory Price <gregory.price@xxxxxxxxxxxx> writes:
> 
> > On Mon, Dec 04, 2023 at 04:19:02PM +0800, Huang, Ying wrote:
> >> Gregory Price <gregory.price@xxxxxxxxxxxx> writes:
> >> 
> >> > If the structure is built as a matrix of (cpu_node,mem_nodes),
> >> > the you can also optimize based on the node the task is running on.
> >> 
> >> The matrix stuff makes the situation complex.  If people do need
> >> something like that, they can just use set_memorypolicy2() with user
> >> specified weights.  I still believe that "make simple stuff simple, and
> >> complex stuff possible".
> >> 
> >
> > I don't think it's particularly complex, since we already have a
> > distance matrix for numa nodes:
> >
> > available: 2 nodes (0-1)
> > ... snip ...
> > node distances:
> > node   0   1
> >   0:  10  21
> >   1:  21  10
> >
> > This would follow the same thing, just adjustable for bandwidth.
> 
> We add complexity for requirement. Not there's something similar
> already.
> 
> > I personally find the (src,dst) matrix very important for flexibility.
> 
> With set_memorypolicy2(), I think we have the needed flexibility for
> users needs the complexity.
> 
> > But if there is particular pushback against it, having a one dimensional
> > array is better than not having it, so I will take what I can get.
> 
> TBH, I don't think that we really need that.  Especially given we will
> have set_memorypolicy2().
>

From a complexity standpoint, it is exactly as complex as the hardware
configuration itself: each socket has a different view of the memory
topology. If you have a non-homogeneous memory configuration (e.g. a
different number of CXL expanders on one socket than the other), a flat
array of weights has no way of capturing this hardware configuration.

That makes the feature significantly less useful. In fact, it makes the
feature equivalent to set_mempolicy2 - except that weights could be
changed at runtime from outside a process.


A matrix resolves one very specific use case: task migration.


set_mempolicy2 is not sufficient to solve this.  There is presently no
way for an external task to change the mempolicy of an existing task.
That means a task must become "migration aware" to use weighting in the
context of containers where migrations are likely.

Two things to consider: A task...
   a) has no way of knowing a migration occurred
   b) may not have visibility of numa nodes outside its cpusets prior to
      a migration - making it unlikely or impossible to set weights
      correctly in the event a migration occurs.

If a server with 2 sockets is set up non-homogeneously (different amount
of CXL memory expanders on each socket), then the effective bandwidth
distribution between sockets will be different.

If a container is migrated between sockets in this situation, then tasks
with manually set weights - or with global weights stored in a single
flat array - will end up with a poor memory distribution relative to the
new view of the system.

Requiring the global settings to be a flat array basically forces the
global weights to be sub-optimal for any use case that is not a single
workload consuming every core on the system.

If the system provides a matrix, then the global settings can be optimal
and re-weighting in response to migration happens cleanly and transparently.

~Gregory



