On Tue, 16 Nov 2021 12:12:37 +0530 "Aneesh Kumar K.V" <aneesh.kumar@xxxxxxxxxxxxx> wrote:

> This syscall can be used to set a home node for the MPOL_BIND and
> MPOL_PREFERRED_MANY memory policies. Users should use this syscall after
> setting up a memory policy for the specified range, as shown below:
>
>   mbind(p, nr_pages * page_size, MPOL_BIND, new_nodes->maskp,
>         new_nodes->size + 1, 0);
>   sys_set_mempolicy_home_node((unsigned long)p, nr_pages * page_size,
>         home_node, 0);
>
> The syscall allows specifying a home node/preferred node from which the
> kernel will fulfill memory allocation requests first.
>
> For an address range with the MPOL_BIND memory policy, if the nodemask
> specifies more than one node, page allocations will come from the node in
> the nodemask with sufficient free memory that is closest to the home
> node/preferred node.
>
> For MPOL_PREFERRED_MANY, if the nodemask specifies more than one node,
> page allocations will come from the node in the nodemask with sufficient
> free memory that is closest to the home node/preferred node. If none of
> the nodes specified in the nodemask has enough memory, the allocation
> will be attempted from the NUMA node in the system that is closest to
> the home node.
>
> This helps applications hint at a preferred node for memory allocation
> while falling back to _only_ a restricted set of nodes if memory is not
> available on the preferred node. The fallback allocation is attempted
> from the node nearest to the preferred node.
>
> This gives applications control over the NUMA nodes used for memory
> allocation and avoids the default fallback to slow-memory NUMA nodes.
> For example, on a system with DRAM on NUMA nodes 1, 2 and 3 and slow
> memory on nodes 10, 11 and 12:
>
>   new_nodes = numa_bitmask_alloc(nr_nodes);
>
>   numa_bitmask_setbit(new_nodes, 1);
>   numa_bitmask_setbit(new_nodes, 2);
>   numa_bitmask_setbit(new_nodes, 3);
>
>   p = mmap(NULL, nr_pages * page_size, protflag, mapflag, -1, 0);
>   mbind(p, nr_pages * page_size, MPOL_BIND, new_nodes->maskp,
>         new_nodes->size + 1, 0);
>
>   sys_set_mempolicy_home_node(p, nr_pages * page_size, 2, 0);
>
> This will allocate from nodes closer to node 2 and will make sure the
> kernel only allocates from nodes 1, 2 and 3. Memory will not be allocated
> from slow-memory nodes 10, 11 and 12.
>
> With MPOL_PREFERRED_MANY, on the other hand, the kernel will first try to
> allocate from the node closest to node 2 among nodes 1, 2 and 3. If those
> nodes don't have enough memory, the kernel will allocate from whichever
> of the slow-memory nodes 10, 11 and 12 is closest to node 2.
>
> ...
>
> @@ -1477,6 +1478,60 @@ static long kernel_mbind(unsigned long start, unsigned long len,
>  	return do_mbind(start, len, lmode, mode_flags, &nodes, flags);
>  }
>  
> +SYSCALL_DEFINE4(set_mempolicy_home_node, unsigned long, start, unsigned long, len,
> +		unsigned long, home_node, unsigned long, flags)
> +{
> +	struct mm_struct *mm = current->mm;
> +	struct vm_area_struct *vma;
> +	struct mempolicy *new;
> +	unsigned long vmstart;
> +	unsigned long vmend;
> +	unsigned long end;
> +	int err = -ENOENT;
> +
> +	if (start & ~PAGE_MASK)
> +		return -EINVAL;
> +	/*
> +	 * flags is used for future extension if any.
> +	 */
> +	if (flags != 0)
> +		return -EINVAL;
> +
> +	if (!node_online(home_node))
> +		return -EINVAL;

What's the thinking here?  The node can later be offlined and the kernel
takes no action to reset home nodes, so why not permit setting a
presently-offline node as the home node?

Checking here seems rather arbitrary?
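
For reference, a rough (untested) userspace sketch of the flow described in
the changelog is below.  It assumes kernel headers from a tree carrying this
patch so that __NR_set_mempolicy_home_node is defined; there is no libc
wrapper yet, so the set_mempolicy_home_node() helper here is purely
illustrative and just calls syscall(2).  Build with -lnuma.

  #define _GNU_SOURCE
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>
  #include <sys/mman.h>
  #include <sys/syscall.h>
  #include <numa.h>
  #include <numaif.h>

  /* Illustrative wrapper; no glibc wrapper exists for the new syscall. */
  static long set_mempolicy_home_node(void *start, unsigned long len,
                                      unsigned long home_node,
                                      unsigned long flags)
  {
          return syscall(__NR_set_mempolicy_home_node, start, len,
                         home_node, flags);
  }

  int main(void)
  {
          unsigned long nr_pages = 1024;
          unsigned long page_size = sysconf(_SC_PAGESIZE);
          struct bitmask *new_nodes = numa_bitmask_alloc(numa_max_node() + 1);
          void *p;

          /* DRAM nodes only; slow-memory nodes 10-12 stay out of the mask. */
          numa_bitmask_setbit(new_nodes, 1);
          numa_bitmask_setbit(new_nodes, 2);
          numa_bitmask_setbit(new_nodes, 3);

          p = mmap(NULL, nr_pages * page_size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
          if (p == MAP_FAILED) {
                  perror("mmap");
                  return 1;
          }

          /* Restrict the range to nodes 1, 2 and 3 ... */
          if (mbind(p, nr_pages * page_size, MPOL_BIND, new_nodes->maskp,
                    new_nodes->size + 1, 0)) {
                  perror("mbind");
                  return 1;
          }

          /* ... and ask for allocations to be served from node 2 first. */
          if (set_mempolicy_home_node(p, nr_pages * page_size, 2, 0)) {
                  perror("set_mempolicy_home_node");
                  return 1;
          }

          memset(p, 0, nr_pages * page_size);     /* fault the pages in */
          return 0;
  }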