Re: [PATCH] Reenable NUMA policy support in the slab allocator

On Mon, Aug 12, 2024 at 10:55 AM Christoph Lameter via B4 Relay
<devnull+cl.gentwo.org@xxxxxxxxxx> wrote:
>
> From: Christoph Lameter <cl@xxxxxxxxxx>
>
> Revert commit 8014c46ad991f05b15ffbc0c6ae130bdf911187b
> ("slub: use alloc_pages_node() in alloc_slab_page()").
>
> The reverted commit disabled NUMA policy support in the slab allocator. It
> did not take into account that alloc_pages() obeys memory policies while
> alloc_pages_node() does not.
>
> As a result of that commit, slab memory allocations are no longer spread via
> the interleave policy across all available NUMA nodes at bootup. Instead,
> all slab memory is allocated close to the boot processor, leading to an
> imbalance of memory accesses on NUMA systems.
>
> Applications using MPOL_INTERLEAVE as a memory policy will also no longer
> spread slab allocations over all nodes in the interleave set but will
> allocate memory locally. This may likewise result in unbalanced allocations
> on a single node if, e.g., one process performs the memory allocation on
> behalf of all the other processes.
>
> SLUB does not apply memory policies to individual object allocations.
> However, it relies on the page allocator's support for memory policies
> through alloc_pages() to perform NUMA memory allocations at the folio or
> page level. SLUB also applies memory policies when retrieving partially
> allocated slab pages from the partial list.
>

Please add a Fixes tag. And should this be sent to stable?

The patch makes sense to me. Reviewed-by: Yang Shi <shy828301@xxxxxxxxx>

> Signed-off-by: Christoph Lameter <cl@xxxxxxxxxx>
> ---
>  mm/slub.c | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index c9d8a2497fd6..4dea3c7df5ad 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2318,7 +2318,11 @@ static inline struct slab *alloc_slab_page(gfp_t flags, int node,
>         struct slab *slab;
>         unsigned int order = oo_order(oo);
>
> -       folio = (struct folio *)alloc_pages_node(node, flags, order);
> +       if (node == NUMA_NO_NODE)
> +               folio = (struct folio *)alloc_pages(flags, order);
> +       else
> +               folio = (struct folio *)__alloc_pages_node(node, flags, order);
> +
>         if (!folio)
>                 return NULL;
>
>
> ---
> base-commit: d74da846046aeec9333e802f5918bd3261fb5509
> change-id: 20240806-numa_policy-5188f44ba0d8
>
> Best regards,
> --
> Christoph Lameter <cl@xxxxxxxxxx>