Re: [PATCH v7 1/5] mm/mempolicy: Add MPOL_PREFERRED_MANY for multiple preferred nodes

On Tue 03-08-21 13:59:18, Feng Tang wrote:
> From: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
> 
> The NUMA APIs currently allow passing in a "preferred node" as a
> single bit set in a nodemask.  If more than one bit is set, bits
> after the first are ignored.
> 
> This single node is generally OK for location-based NUMA where
> memory being allocated will eventually be operated on by a single
> CPU.  However, in systems with multiple memory types, folks want
> to target a *type* of memory instead of a location.  For instance,
> someone might want some high-bandwidth memory but does not care about
> the CPU next to which it is allocated.  Or, they might want a cheap,
> high-capacity allocation and want to target all NUMA nodes which
> have persistent memory in volatile mode.  In both of these cases,
> the application wants to target a *set* of nodes, but does not
> want strict MPOL_BIND behavior, as that could lead to the OOM killer
> or a SIGSEGV.
> 
> So add MPOL_PREFERRED_MANY policy to support the multiple preferred
> nodes requirement. This is not a pie-in-the-sky dream for an API;
> it is a response to a specific ask from more than one group at Intel.
> Specifically:
> 
> 1. There are existing libraries that target memory types such as
>    https://github.com/memkind/memkind.  These are known to suffer
>    from SIGSEGVs when memory is low on targeted memory "kinds" that
>    span more than one node.  The MCDRAM on a Xeon Phi in "Cluster on
>    Die" mode is an example of this.
> 2. Volatile-use persistent memory users want to have a memory policy
>    which is targeted at either "cheap and slow" (PMEM) or "expensive and
>    fast" (DRAM).  However, they do not want to experience allocation
>    failures when the targeted type is unavailable.
> 3. Allocate-then-run.  Generally, we let the process scheduler decide
>    on which physical CPU to run a task.  That location provides a
>    default allocation policy, and memory availability is not generally
>    considered when placing tasks.  For situations where memory is
>    valuable and constrained, some users want to allocate memory first,
>    *then* allocate close compute resources to the allocation.  This is
>    the reverse of the normal (CPU) model.  Accelerators such as GPUs
>    that operate on core-mm-managed memory are interested in this model.
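
(Aside for anyone reading along without trying the series: below is a
minimal userspace sketch of how the new policy is meant to be consumed.
It assumes the MPOL_PREFERRED_MANY value from this series (5 here) ends
up in the uapi headers and uses libnuma's set_mempolicy() wrapper; the
node IDs are made up for illustration.)

/* Minimal sketch: prefer a set of nodes (say, PMEM nodes 2 and 3), but
 * let allocations fall back to the rest of the system instead of hitting
 * the OOM killer as strict MPOL_BIND would.
 *
 * Build with: gcc prefer_many_demo.c -lnuma
 */
#include <numaif.h>	/* set_mempolicy() wrapper from libnuma */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#ifndef MPOL_PREFERRED_MANY
#define MPOL_PREFERRED_MANY 5	/* assumed value; take it from uapi mempolicy.h once merged */
#endif

int main(void)
{
	unsigned long nodemask = (1UL << 2) | (1UL << 3);	/* hypothetical PMEM nodes */

	if (set_mempolicy(MPOL_PREFERRED_MANY, &nodemask,
			  sizeof(nodemask) * 8) < 0) {
		perror("set_mempolicy(MPOL_PREFERRED_MANY)");
		exit(EXIT_FAILURE);
	}

	/* Pages faulted in by this task from here on prefer nodes 2-3,
	 * with fallback to the other nodes when those run short. */
	char *buf = malloc(64UL << 20);
	if (buf)
		memset(buf, 0, 64UL << 20);

	free(buf);
	return 0;
}
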
> 
> As suggested by Michal Hocko, a check is added in sanitize_mpol_flags()
> so that the 'prefer_many' policy cannot be selected for now; it will be
> removed in a later patch once the full 'prefer_many' implementation is
> in place.
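
(For anyone skimming without the diff at hand: the temporary guard is
conceptually just an early rejection in sanitize_mpol_flags(), roughly
along the lines of the sketch below; this is an illustration of the
idea, not the exact hunk from this patch.)

	/* 'prefer_many' is not wired up yet; reject it so userspace cannot
	 * select a half-implemented policy.  A later patch in the series
	 * removes this check once the implementation is complete. */
	if (*mode == MPOL_PREFERRED_MANY)
		return -EINVAL;
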
> 
> [Michal Hocko: suggest to refine policy_node/policy_nodemask handling]
> Link: https://lore.kernel.org/r/20200630212517.308045-4-ben.widawsky@xxxxxxxxx
> Co-developed-by: Ben Widawsky <ben.widawsky@xxxxxxxxx>
> Signed-off-by: Ben Widawsky <ben.widawsky@xxxxxxxxx>
> Signed-off-by: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
> Signed-off-by: Feng Tang <feng.tang@xxxxxxxxx>

Acked-by: Michal Hocko <mhocko@xxxxxxxx>
Thanks!
-- 
Michal Hocko
SUSE Labs


