Re: [PATCH v5 1/9] mm/demotion: Add support for explicit memory tiers

On Fri, Jun 10, 2022 at 10:57:08AM +0100, Jonathan Cameron wrote:
> On Thu, 9 Jun 2022 16:41:04 -0400
> Johannes Weiner <hannes@xxxxxxxxxxx> wrote:
> > On Thu, Jun 09, 2022 at 03:22:43PM +0100, Jonathan Cameron wrote:
> > Would it make more sense to have the platform/devicetree/driver
> > provide more fine-grained distance values similar to NUMA distances,
> > and have a driver-scope tunable to override/correct? And then have the
> > distance value function as the unique tier ID and rank in one.
> 
> Absolutely a good thing to provide that information, but it's black
> magic. There are too many contradicting metrics (latency vs bandwidth etc)
> even not including a more complex system model like Jerome Glisse proposed
> a few years back. https://lore.kernel.org/all/20190118174512.GA3060@xxxxxxxxxx/
> CXL 2.0 got this more right than anything else I've seen, as it provides
> discoverable topology along with details like latency to cross between
> particular switch ports.  Actually using that data (other than by throwing
> it to userspace controls for HPC apps etc) is going to take some figuring out.
> Even the question of what + how we expose this info to userspace is
> non-obvious.

Right, I don't think those would be scientifically accurate - but
neither is a number between 1 and 3. The way I look at it is more
about spreading out the address space a bit, to allow expressing
nuanced differences without risking conflicts and overlaps. Hopefully
this results in the shipped values stabilizing over time and thus
requiring less and less intervention and overriding from userspace.

> > Going further, it could be useful to separate the business of hardware
> > properties (and configuring quirks) from the business of configuring
> > MM policies that should be applied to the resulting tier hierarchy.
> > They're somewhat orthogonal tuning tasks, and one of them might become
> > obsolete before the other (if the quality of distance values provided
> > by drivers improves before the quality of MM heuristics ;). Separating
> > them might help clarify the interface for both designers and users.
> > 
> > E.g. a memdev class scope with a driver-wide distance value, and a
> > memdev scope for per-device values that default to "inherit driver
> > value". The memtier subtree would then have an r/o structure, but
> > allow tuning per-tier interleaving ratio[1], demotion rules etc.
> 
> Ok that makes sense.  I'm not sure if that ends up as an implementation
> detail, or affects the userspace interface of this particular element.
> 
> I'm not sure completely read only is flexible enough (though mostly RO is fine)
> as we keep sketching out cases where any attempt to do things automatically
> does the wrong thing and where we need to add an extra tier to get
> everything to work.  Short of having a lot of tiers I'm not sure how
> we could make the default work well.  Maybe a lot of "tiers" is fine,
> though if we go that way we should perhaps rename them, since they no
> longer match the current concept of a tier.
> 
> Imagine a system with subtle difference between different memories such
> as 10% latency increase for same bandwidth.  To get an advantage from
> demoting to such a tier will require really stable usage and long
> run times. Whilst you could design a demotion scheme that takes that
> into account, I think we are a long way from that today.

Good point: there can be a clear hardware difference, but it's a
policy choice whether the MM should treat them as one or two tiers.

What do you think of a per-driver/per-device (overridable) distance
number, combined with a configurable distance cutoff for what
constitutes separate tiers? E.g. cutoff=20 means two devices with
distances of 10 and 20 respectively would be in the same tier, while
devices with 10 and 100 would be in separate ones. The kernel then
generates the tiers based on the distances and grouping cutoff, and
populates the memtier directory tree and nodemasks in sysfs.

It could be simple tier0, tier1, tier2 numbering again, but the
numbers now would mean something to the user. A rank tunable is no
longer necessary.

I think even the nodemasks in the memtier tree could be read-only
then, since corrections should only be necessary when either the
device distance or the tier grouping cutoff is wrong.

Can you think of scenarios where that scheme would fall apart?
