On Mon 22-06-20 09:10:00, Michal Hocko wrote:
[...]
> > The goal of the new mode is to enable some use-cases when using tiered memory
> > usage models which I've lovingly named.
> > 1a. The Hare - The interconnect is fast enough to meet bandwidth and latency
> > requirements allowing preference to be given to all nodes with "fast" memory.
> > 1b. The Indiscriminate Hare - An application knows it wants fast memory (or
> > perhaps slow memory), but doesn't care which node it runs on. The application
> > can prefer a set of nodes and then xpu bind to the local node (cpu, accelerator,
> > etc). This reverses how nodes are chosen today, where the kernel attempts to use
> > memory local to the CPU whenever possible. This will instead attempt to use the
> > accelerator local to the memory.
> > 2. The Tortoise - The administrator (or the application itself) is aware it only
> > needs slow memory, and so can prefer that.
> >
> > Much of this is almost achievable with the bind interface, but the bind
> > interface suffers from an inability to fallback to another set of nodes if
> > binding fails to all nodes in the nodemask.

Yes, and probably worth mentioning explicitly that this might lead to
the OOM killer invocation, so a failure would be disruptive to any
workload which is allowed to allocate from the specific node mask (so
even tasks without any mempolicy).

> > Like MPOL_BIND a nodemask is given. Inherently this removes ordering from the
> > preference.
> >
> > > /* Set first two nodes as preferred in an 8 node system. */
> > > const unsigned long nodes = 0x3;
> > > set_mempolicy(MPOL_PREFER_MANY, &nodes, 8);
> > >
> > > /* Mimic interleave policy, but have fallback. */
> > > const unsigned long nodes = 0xaa;
> > > set_mempolicy(MPOL_PREFER_MANY, &nodes, 8);
> >
> > Some internal discussion took place around the interface. There are two
> > alternatives which we have discussed, plus one I stuck in:
> > 1. Ordered list of nodes. Currently it's believed that the added complexity is
> > not needed for expected usecases.

There is no ordering in MPOL_BIND either, and even though numa apis
tend to be screwed up from multiple aspects this is not a problem I
have ever stumbled over.

> > 2. A flag for bind to allow falling back to other nodes. This confuses the
> > notion of binding and is less flexible than the current solution.

Agreed.

> > 3. Create flags or new modes that help with some ordering. This offers both a
> > friendlier API as well as a solution for more customized usage. It's unknown
> > if it's worth the complexity to support this. Here is sample code for how
> > this might work:
> >
> > > // Default
> > > set_mempolicy(MPOL_PREFER_MANY | MPOL_F_PREFER_ORDER_SOCKET, NULL, 0);
> > > // which is the same as
> > > set_mempolicy(MPOL_DEFAULT, NULL, 0);

OK

> > > // The Hare
> > > set_mempolicy(MPOL_PREFER_MANY | MPOL_F_PREFER_ORDER_TYPE, NULL, 0);
> > >
> > > // The Tortoise
> > > set_mempolicy(MPOL_PREFER_MANY | MPOL_F_PREFER_ORDER_TYPE_REV, NULL, 0);
> > >
> > > // Prefer the fast memory of the first two sockets
> > > set_mempolicy(MPOL_PREFER_MANY | MPOL_F_PREFER_ORDER_TYPE, -1, 2);
> > >
> > > // Prefer specific nodes for something wacky
> > > set_mempolicy(MPOL_PREFER_MANY | MPOL_F_PREFER_ORDER_TYPE_CUSTOM, 0x17c, 1024);

I am not so sure about these though. It would be much easier to start
without additional modifiers and provide MPOL_PREFER_MANY without any
additional restrictions first (btw. I would like MPOL_PREFER_MASK more
but I do understand that naming is not the top priority now).
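To be more concrete, I would expect the very first example from the
cover letter to be pretty much all that is needed for the initial
submission - something like the following, where MPOL_PREFER_MANY is
the mode proposed by this series and everything else (node numbers,
the numaif.h export) is made up for the sake of illustration:

	#include <numaif.h>	/* set_mempolicy(); assuming the new mode gets exported here */
	#include <stdio.h>

	int main(void)
	{
		/* prefer (but do not require) nodes 0 and 2 in an 8 node system */
		unsigned long nodes = 0x5;

		if (set_mempolicy(MPOL_PREFER_MANY, &nodes, 8))
			perror("set_mempolicy");

		/* allocations now try nodes 0,2 first and fall back to the rest */
		return 0;
	}

Any ordering or memory type modifiers can always be added as flags
later if a real usecase shows up.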
It would also be great to provide a high level semantic description
here. I have very quickly glanced through the patches and they are not
really trivial to follow with many incremental steps, so the higher
level intention is lost easily. Do I get it right that the default
semantic is essentially
	- allocate a page from the given nodemask (with
	  __GFP_RETRY_MAYFAIL semantic)
	- fall back to a numa unrestricted allocation with the default
	  numa policy on failure

Or are there any usecases to modify how hard to keep the preference
over the fallback?
-- 
Michal Hocko
SUSE Labs
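Btw. to put my understanding of that default semantic into (pseudo)
code, this is roughly what I have in mind - just a sketch against the
current allocator entry point, with a made up function name, not the
code from this series:

	/* Sketch only - not the implementation from this patchset. */
	static struct page *alloc_page_prefer_many(gfp_t gfp, unsigned int order,
						   nodemask_t *prefmask)
	{
		struct page *page;

		/* 1) try the preferred nodemask only and allow the attempt to fail */
		page = __alloc_pages_nodemask(gfp | __GFP_RETRY_MAYFAIL | __GFP_NOWARN,
					      order, numa_node_id(), prefmask);
		if (page)
			return page;

		/* 2) fall back to an unrestricted allocation with the default policy */
		return __alloc_pages_nodemask(gfp, order, numa_node_id(), NULL);
	}

The second step is what makes the difference to MPOL_BIND and the
potential OOM killer invocation mentioned above.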