User-visible API changes/additions should be posted to the linux-api mailing list. Now added.

On Fri 19-06-20 09:24:07, Ben Widawsky wrote:
> This patch series introduces the concept of the MPOL_PREFERRED_MANY mempolicy.
> This mempolicy mode can be used with either the set_mempolicy(2) or mbind(2)
> interfaces. Like the MPOL_PREFERRED interface, it allows an application to set a
> preference for nodes which will fulfill memory allocation requests. Like the
> MPOL_BIND interface, it works over a set of nodes.
>
> Summary:
> 1-2: Random fixes I found along the way
> 3-4: Logic to handle many preferred nodes in page allocation
> 5-9: Plumbing to allow multiple preferred nodes in mempolicy
> 10-13: Teach page allocation APIs about nodemasks
> 14: Provide a helper to generate preferred nodemasks
> 15: Have page allocation callers generate preferred nodemasks
> 16-17: Flip the switch to have __alloc_pages_nodemask take the preferred mask
> 18: Expose the new uapi
>
> Along with these patches are patches for libnuma, numactl, numademo, and memhog.
> They still need some polish, but can be found here:
> https://gitlab.com/bwidawsk/numactl/-/tree/prefer-many
> It allows new usage: `numactl -P 0,3,4`
>
> The goal of the new mode is to enable some use-cases of tiered memory usage
> models, which I've lovingly named:
> 1a. The Hare - The interconnect is fast enough to meet bandwidth and latency
> requirements, allowing preference to be given to all nodes with "fast" memory.
> 1b. The Indiscriminate Hare - An application knows it wants fast memory (or
> perhaps slow memory), but doesn't care which node it runs on. The application
> can prefer a set of nodes and then xpu-bind to the local node (cpu, accelerator,
> etc). This reverses how nodes are chosen today, where the kernel attempts to
> use memory local to the CPU whenever possible; this will instead use the
> accelerator local to the memory.
> 2. The Tortoise - The administrator (or the application itself) is aware it
> only needs slow memory, and so can prefer that.
>
> Much of this is almost achievable with the bind interface, but the bind
> interface suffers from an inability to fall back to another set of nodes if
> binding fails for all nodes in the nodemask.
>
> Like MPOL_BIND, a nodemask is given. Inherently this removes ordering from
> the preference.
>
> > /* Set the first two nodes as preferred in an 8 node system. */
> > const unsigned long nodes = 0x3;
> > set_mempolicy(MPOL_PREFERRED_MANY, &nodes, 8);
>
> > /* Mimic the interleave policy, but have a fallback. */
> > const unsigned long nodes = 0xaa;
> > set_mempolicy(MPOL_PREFERRED_MANY, &nodes, 8);
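For concreteness, here is the first snippet above as a complete program; a
minimal sketch, assuming the new uapi from patch 18 is applied. The numeric
value used for MPOL_PREFERRED_MANY below is an assumption, so take the real
one from the series' <uapi/linux/mempolicy.h>; and since glibc does not wrap
set_mempolicy(2), the sketch goes through syscall(2) rather than libnuma:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef MPOL_PREFERRED_MANY
#define MPOL_PREFERRED_MANY 5	/* assumed value; use the series' uapi header */
#endif

int main(void)
{
	/* Prefer nodes 0 and 1 (bits 0 and 1) in an 8-node system. */
	unsigned long nodes = 0x3;

	/* glibc has no wrapper for set_mempolicy(2); call it directly. */
	if (syscall(SYS_set_mempolicy, MPOL_PREFERRED_MANY, &nodes, 8)) {
		fprintf(stderr, "set_mempolicy: %s\n", strerror(errno));
		return 1;
	}

	/*
	 * Allocations for this task now prefer nodes 0-1 but, unlike
	 * MPOL_BIND, can fall back to the remaining nodes under pressure.
	 */
	return 0;
}

The same nodemask could equally be handed to mbind(2) to apply the preference
to a single mapping rather than to the whole task.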
> Some internal discussion took place around the interface. There are two
> alternatives which we have discussed, plus one I stuck in:
> 1. Ordered list of nodes. Currently it's believed that the added complexity is
> not needed for expected use-cases.
> 2. A flag for bind to allow falling back to other nodes. This confuses the
> notion of binding and is less flexible than the current solution.
> 3. Create flags or new modes that help with some ordering. This offers both a
> friendlier API as well as a solution for more customized usage. It's unknown
> if it's worth the complexity to support this. Here is sample code for how
> this might work:
>
> > // Default
> > set_mempolicy(MPOL_PREFERRED_MANY | MPOL_F_PREFER_ORDER_SOCKET, NULL, 0);
> > // which is the same as
> > set_mempolicy(MPOL_DEFAULT, NULL, 0);
> >
> > // The Hare
> > set_mempolicy(MPOL_PREFERRED_MANY | MPOL_F_PREFER_ORDER_TYPE, NULL, 0);
> >
> > // The Tortoise
> > set_mempolicy(MPOL_PREFERRED_MANY | MPOL_F_PREFER_ORDER_TYPE_REV, NULL, 0);
> >
> > // Prefer the fast memory of the first two sockets
> > set_mempolicy(MPOL_PREFERRED_MANY | MPOL_F_PREFER_ORDER_TYPE, -1, 2);
> >
> > // Prefer specific nodes for something wacky
> > set_mempolicy(MPOL_PREFERRED_MANY | MPOL_F_PREFER_ORDER_TYPE_CUSTOM, 0x17c, 1024);
>
> ---
>
> Cc: Andi Kleen <ak@xxxxxxxxxxxxxxx>
> Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> Cc: Christoph Lameter <cl@xxxxxxxxx>
> Cc: Dan Williams <dan.j.williams@xxxxxxxxx>
> Cc: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
> Cc: David Hildenbrand <david@xxxxxxxxxx>
> Cc: David Rientjes <rientjes@xxxxxxxxxx>
> Cc: Jason Gunthorpe <jgg@xxxxxxxx>
> Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
> Cc: Jonathan Corbet <corbet@xxxxxxx>
> Cc: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@xxxxxxxxxxxxxxx>
> Cc: Lee Schermerhorn <lee.schermerhorn@xxxxxx>
> Cc: Li Xinhai <lixinhai.lxh@xxxxxxxxx>
> Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
> Cc: Michal Hocko <mhocko@xxxxxxxxxx>
> Cc: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
> Cc: Mina Almasry <almasrymina@xxxxxxxxxx>
> Cc: Tejun Heo <tj@xxxxxxxxxx>
> Cc: Vlastimil Babka <vbabka@xxxxxxx>
>
> Ben Widawsky (14):
>   mm/mempolicy: Add comment for missing LOCAL
>   mm/mempolicy: Use node_mem_id() instead of node_id()
>   mm/page_alloc: start plumbing multi preferred node
>   mm/page_alloc: add preferred pass to page allocation
>   mm: Finish handling MPOL_PREFERRED_MANY
>   mm: clean up alloc_pages_vma (thp)
>   mm: Extract THP hugepage allocation
>   mm/mempolicy: Use __alloc_page_node for interleaved
>   mm: kill __alloc_pages
>   mm/mempolicy: Introduce policy_preferred_nodes()
>   mm: convert callers of __alloc_pages_nodemask to pmask
>   alloc_pages_nodemask: turn preferred nid into a nodemask
>   mm: Use less stack for page allocations
>   mm/mempolicy: Advertise new MPOL_PREFERRED_MANY
>
> Dave Hansen (4):
>   mm/mempolicy: convert single preferred_node to full nodemask
>   mm/mempolicy: Add MPOL_PREFERRED_MANY for multiple preferred nodes
>   mm/mempolicy: allow preferred code to take a nodemask
>   mm/mempolicy: refactor rebind code for PREFERRED_MANY
>
>  .../admin-guide/mm/numa_memory_policy.rst |  22 +-
>  include/linux/gfp.h                       |  19 +-
>  include/linux/mempolicy.h                 |   4 +-
>  include/linux/migrate.h                   |   4 +-
>  include/linux/mmzone.h                    |   3 +
>  include/uapi/linux/mempolicy.h            |   6 +-
>  mm/hugetlb.c                              |  10 +-
>  mm/internal.h                             |   1 +
>  mm/mempolicy.c                            | 271 +++++++++++++-----
>  mm/page_alloc.c                           | 179 +++++++++++-
>  10 files changed, 403 insertions(+), 116 deletions(-)
>
> --
> 2.27.0

--
Michal Hocko
SUSE Labs