Re: [PATCH v6 5/6] mm/mempolicy: Advertise new MPOL_PREFERRED_MANY

On Mon 12-07-21 16:09:33, Feng Tang wrote:
> From: Ben Widawsky <ben.widawsky@xxxxxxxxx>
> 
> Adds a new mode to the existing mempolicy modes, MPOL_PREFERRED_MANY.
> 
> MPOL_PREFERRED_MANY will be adequately documented in the internal
> admin-guide with this patch. Eventually, the man pages for mbind(2),
> get_mempolicy(2), set_mempolicy(2) and numactl(8) will also have text
> about this mode. Those shall contain the canonical reference.
> 
> NUMA systems continue to become more prevalent. New technologies like
> PMEM make finer grain control over memory access patterns increasingly
> desirable. MPOL_PREFERRED_MANY allows userspace to specify a set of
> nodes that will be tried first when performing allocations. If those
> allocations fail, all remaining nodes will be tried. It's a straight
> forward API which solves many of the presumptive needs of system
> administrators wanting to optimize workloads on such machines. The mode
> will work either per VMA, or per thread.
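As a usage illustration for the per-VMA case (a hypothetical sketch, not
part of the patch; it assumes MPOL_PREFERRED_MANY carries the uapi value
proposed by this series and uses the mbind() wrapper from libnuma):

	#include <stdio.h>
	#include <errno.h>
	#include <string.h>
	#include <sys/mman.h>
	#include <numaif.h>	/* mbind(), link with -lnuma */

	#ifndef MPOL_PREFERRED_MANY
	#define MPOL_PREFERRED_MANY 5	/* assumed value from this series */
	#endif

	int main(void)
	{
		size_t len = 2UL << 20;
		/* Prefer nodes 0 and 1 for this mapping only. */
		unsigned long nodemask = (1UL << 0) | (1UL << 1);
		void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		if (buf == MAP_FAILED)
			return 1;
		if (mbind(buf, len, MPOL_PREFERRED_MANY, &nodemask,
			  sizeof(nodemask) * 8, 0)) {
			fprintf(stderr, "mbind: %s\n", strerror(errno));
			return 1;
		}
		/* Pages faulted in for buf now prefer nodes 0 and 1. */
		return 0;
	}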
> 
> Link: https://lore.kernel.org/r/20200630212517.308045-13-ben.widawsky@xxxxxxxxx
> Signed-off-by: Ben Widawsky <ben.widawsky@xxxxxxxxx>
> Signed-off-by: Feng Tang <feng.tang@xxxxxxxxx>
> ---
>  Documentation/admin-guide/mm/numa_memory_policy.rst | 16 ++++++++++++----
>  mm/mempolicy.c                                      |  7 +------
>  2 files changed, 13 insertions(+), 10 deletions(-)
> 
> diff --git a/Documentation/admin-guide/mm/numa_memory_policy.rst b/Documentation/admin-guide/mm/numa_memory_policy.rst
> index 067a90a1499c..cd653561e531 100644
> --- a/Documentation/admin-guide/mm/numa_memory_policy.rst
> +++ b/Documentation/admin-guide/mm/numa_memory_policy.rst
> @@ -245,6 +245,14 @@ MPOL_INTERLEAVED
>  	address range or file.  During system boot up, the temporary
>  	interleaved system default policy works in this mode.
>  
> +MPOL_PREFERRED_MANY
> +        This mode specifies that the allocation should be attempted from the
> +        nodemask specified in the policy. If that allocation fails, the kernel
> +        will search other nodes, in order of increasing distance from the first
> +        set bit in the nodemask based on information provided by the platform
> +        firmware. It is similar to MPOL_PREFERRED with the main exception that
> +        it is an error to have an empty nodemask.

I believe the target audience of this document is users rather than
kernel developers, and for them the wording might be rather cryptic. I
would rephrase it like this:
	This mode specifies that the allocation should be preferably
	satisfied from the nodemask specified in the policy. If there is
	memory pressure on all nodes in the nodemask, the allocation
	can fall back to all existing NUMA nodes. This is effectively
	MPOL_PREFERRED, but for a mask rather than a single node.
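A minimal sketch of the per-thread usage for reference (hypothetical
user code; it again assumes the MPOL_PREFERRED_MANY value proposed by
this series and uses the raw syscall so it does not depend on an
updated libnuma):

	#include <stdio.h>
	#include <errno.h>
	#include <string.h>
	#include <unistd.h>
	#include <sys/syscall.h>

	#ifndef MPOL_PREFERRED_MANY
	#define MPOL_PREFERRED_MANY 5	/* assumed value from this series */
	#endif

	int main(void)
	{
		/* Prefer nodes 0 and 2; other nodes are fallback only. */
		unsigned long nodemask = (1UL << 0) | (1UL << 2);

		if (syscall(SYS_set_mempolicy, MPOL_PREFERRED_MANY,
			    &nodemask, sizeof(nodemask) * 8)) {
			fprintf(stderr, "set_mempolicy: %s\n", strerror(errno));
			return 1;
		}
		/* Allocations by this thread now prefer nodes 0 and 2. */
		return 0;
	}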

With that or something similar, feel free to add
Acked-by: Michal Hocko <mhocko@xxxxxxxx>
-- 
Michal Hocko
SUSE Labs