Re: [PATCH] mm: page_alloc: don't allocate page from memoryless nodes

On 15.02.23 11:04, Mike Rapoport wrote:
On Wed, Feb 15, 2023 at 10:43:58AM +0100, David Hildenbrand wrote:
On 15.02.23 10:30, Mike Rapoport wrote:
On Tue, Feb 14, 2023 at 02:38:44PM +0100, Michal Hocko wrote:
On Tue 14-02-23 12:58:39, David Hildenbrand wrote:
On 14.02.23 12:48, David Hildenbrand wrote:
On 14.02.23 12:44, Mike Rapoport wrote:
(added x86 folks)

On Tue, Feb 14, 2023 at 12:29:42PM +0100, David Hildenbrand wrote:
On 14.02.23 12:26, Qi Zheng wrote:
On 2023/2/14 19:22, David Hildenbrand wrote:

TBH, this is the first time I've heard of NODE_MIN_SIZE, and it seems to be
a pretty x86-specific thing.

Are we sure we want to get NODE_MIN_SIZE involved?

Maybe add an arch_xxx() to handle it?

I still haven't figured out what we want to achieve with NODE_MIN_SIZE at
all. It smells like an arch-specific hack looking at

"Don't confuse VM with a node that doesn't have the minimum amount of
memory"

Why shouldn't mm-core deal with that?

Well, a node with <4M RAM is not very useful and bears all the overhead of
an extra live node.

And totally not with 4.1M, haha.

I really like the "Might fix boot" in the commit description.


But, hey, why don't we just drop that '< NODE_MIN_SIZE' check and let people
with weird HW configurations just live with it?


;)


Actually, remembering 09f49dca570a ("mm: handle uninitialized numa nodes
gracefully"), this might be the right thing to do. That commit assumes that
all offline nodes get their pgdat allocated in free_area_init(), so that we
end up with an allocated pgdat for all possible nodes. The reasoning, IIRC,
was that we don't care about wasting memory in weird VM setups.
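
(For context: after that commit, free_area_init() walks all possible nodes
and allocates a pgdat for the offline ones, so NODE_DATA(nid) is never
NULL. Roughly, paraphrased from memory rather than quoted:)

	for_each_node(nid) {
		if (node_online(nid))
			continue;

		/* allocator is not up yet, the arch helper allocates the pgdat */
		pgdat = arch_alloc_nodedata(nid);
		arch_refresh_nodedata(nid, pgdat);
		free_area_init_memoryless_node(nid);
	}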

Yes, that is the case indeed. I suspect NODE_MIN_SIZE is a relic of the
past, when some PXM entries were incorrect or fishy. I would just drop the
check and see whether something breaks, or make those involved back then
remember whether this is addressing anything still relevant these days.
Even a 5MB node makes very little sense if you ask me, as the memmap is
allocated for the whole memory section anyway, and that is 128MB.
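
(For scale: with SPARSEMEM on x86-64 a memory section is 1 << 27 bytes =
128MB, so its memmap alone costs

	(128MB / 4KB) * sizeof(struct page) = 32768 * 64B = 2MB

no matter how little of the section the node actually populates.)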

How about we try this:

From b670120bcacd3fe34a40d7179c70ca2ab69279e0 Mon Sep 17 00:00:00 2001
From: "Mike Rapoport (IBM)" <rppt@xxxxxxxxxx>
Date: Wed, 15 Feb 2023 11:12:18 +0200
Subject: [PATCH] x86/mm: drop 4MB restriction on minimal NUMA node size

Qi Zheng reports crashes in a production environment and provides a
simplified example as a reproducer:

    For example, if we use qemu to start a two NUMA node kernel,
    one of the nodes has 2M memory (less than NODE_MIN_SIZE),
    and the other node has 2G, then we will encounter the
    following panic:

    [    0.149844] BUG: kernel NULL pointer dereference, address: 0000000000000000
    [    0.150783] #PF: supervisor write access in kernel mode
    [    0.151488] #PF: error_code(0x0002) - not-present page
    <...>
    [    0.156056] RIP: 0010:_raw_spin_lock_irqsave+0x22/0x40
    <...>
    [    0.169781] Call Trace:
    [    0.170159]  <TASK>
    [    0.170448]  deactivate_slab+0x187/0x3c0
    [    0.171031]  ? bootstrap+0x1b/0x10e
    [    0.171559]  ? preempt_count_sub+0x9/0xa0
    [    0.172145]  ? kmem_cache_alloc+0x12c/0x440
    [    0.172735]  ? bootstrap+0x1b/0x10e
    [    0.173236]  bootstrap+0x6b/0x10e
    [    0.173720]  kmem_cache_init+0x10a/0x188
    [    0.174240]  start_kernel+0x415/0x6ac
    [    0.174738]  secondary_startup_64_no_verify+0xe0/0xeb
    [    0.175417]  </TASK>
    [    0.175713] Modules linked in:
    [    0.176117] CR2: 0000000000000000

The crashes happen because of an inconsistency between the nodemask, which
treats nodes with less than 4MB as memoryless, and the actual memory that
is fed into the core mm.
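
(Roughly, paraphrasing the SLUB code rather than quoting it: the
kmem_cache_node for the skipped node is never set up, so deactivate_slab()
ends up doing

	struct kmem_cache_node *n = get_node(s, slab_nid(slab)); /* NULL */

	spin_lock_irqsave(&n->list_lock, flags); /* faults at address 0 */

which matches the NULL pointer write in _raw_spin_lock_irqsave above.)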

Commit 9391a3f9c7f1 ("[PATCH] x86_64: Clear more state when ignoring empty
node in SRAT parsing"), which introduced the minimal size of a NUMA node,
does not explain why a node size cannot be less than 4MB or what boot
failures this restriction might fix.

Since then a lot has changed, and the core mm is no longer badly confused
by small node sizes.

Drop the limitation for the minimal node size.

Link: https://lore.kernel.org/all/20230212110305.93670-1-zhengqi.arch@xxxxxxxxxxxxx/
Signed-off-by: Mike Rapoport (IBM) <rppt@xxxxxxxxxx>
---
 arch/x86/include/asm/numa.h | 7 -------
 arch/x86/mm/numa.c          | 7 -------
 2 files changed, 14 deletions(-)

diff --git a/arch/x86/include/asm/numa.h b/arch/x86/include/asm/numa.h
index e3bae2b60a0d..ef2844d69173 100644
--- a/arch/x86/include/asm/numa.h
+++ b/arch/x86/include/asm/numa.h
@@ -12,13 +12,6 @@
 
 #define NR_NODE_MEMBLKS		(MAX_NUMNODES*2)
 
-/*
- * Too small node sizes may confuse the VM badly. Usually they
- * result from BIOS bugs. So dont recognize nodes as standalone
- * NUMA entities that have less than this amount of RAM listed:
- */
-#define NODE_MIN_SIZE (4*1024*1024)
-
 extern int numa_off;
 
 /*
diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c
index 2aadb2019b4f..55e3d895f15c 100644
--- a/arch/x86/mm/numa.c
+++ b/arch/x86/mm/numa.c
@@ -601,13 +601,6 @@ static int __init numa_register_memblks(struct numa_meminfo *mi)
 		if (start >= end)
 			continue;
 
-		/*
-		 * Don't confuse VM with a node that doesn't have the
-		 * minimum amount of memory:
-		 */
-		if (end && (end - start) < NODE_MIN_SIZE)
-			continue;
-
 		alloc_node_data(nid);
 	}

Hopefully it fixes the issue.
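
FWIW, the reported layout can be approximated with something like the
following (illustrative and untested, exact option spelling may need
tweaking):

	qemu-system-x86_64 -m 2050M -smp 2 \
		-object memory-backend-ram,id=ram0,size=2M \
		-object memory-backend-ram,id=ram1,size=2G \
		-numa node,nodeid=0,memdev=ram0 \
		-numa node,nodeid=1,memdev=ram1 \
		-kernel bzImage -append "console=ttyS0" -nographic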

Acked-by: David Hildenbrand <david@xxxxxxxxxx>


The 4 MiB looks like the magical MAX_ORDER (and/or pageblock) thingy to me.
I recall that there were issues in the past when memory exposed to the buddy
only partially covered a pageblock. IIRC, memblock should already take care
not to expose memory to the buddy that is not aligned to MAX_ORDER
boundaries -- correct?

I don't remember those issues, but memblock does not align memory freed to
the buddy.
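
(__free_pages_memory() just hands each range to the buddy in the largest
order chunks the start pfn's alignment allows; roughly, paraphrasing
mm/memblock.c from memory:)

	while (start < end) {
		/* largest order permitted by the alignment of start */
		int order = min(MAX_ORDER - 1UL, __ffs(start));

		/* shrink the chunk until it fits within the range */
		while (start + (1UL << order) > end)
			order--;

		memblock_free_pages(pfn_to_page(start), start, order);
		start += 1UL << order;
	}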

Still, this 4MB looks like really old magic that predates memblock, from a
time when even SPARSEMEM was still experimental.

It's possible that the issues were with DISCONTIGMEM or with bootmem.

I recall where I stumbled over that:

commit 3c5f2eb9695cd241c9898a01388b19a149d0b7d2
Author: Heiko Carstens <hca@xxxxxxxxxxxxx>
Date:   Tue Jul 14 07:46:40 2020 +0200

    s390/mm: avoid trimming to MAX_ORDER

    Trimming to MAX_ORDER was originally done in order to avoid setting
    HOLES_IN_ZONE, which in turn would enable a quite expensive
    pfn_valid() check. pfn_valid() however only checks whether a struct
    page exists for a given pfn.

    With sparsemem vmemmap there are always struct pages, since memmaps
    are allocated for whole sections. Therefore remove the HOLES_IN_ZONE
    comment and the trimming.

    Signed-off-by: Heiko Carstens <hca@xxxxxxxxxxxxx>

So indeed, it might just be a legacy leftover and with SPARSEMEM it should all be fine.

--
Thanks,

David / dhildenb




