Re: [PATCH] mm/sparse: set SECTION_NID_SHIFT to 6

> Sounds nice to me, so here's a patch. Could you review this?


Hi,

sorry for the late reply, I was on vacation. Please send it as a proper stand-alone patch next time, such that it

1. won't get silently ignored by reviewers/maintainers within a thread
2. can easily get picked up/tested

Some minor comments below.

> Thanks,
> Naoya Horiguchi
> ---
> From a146c9f12ae8985c8985a5861330f7528cd14fe8 Mon Sep 17 00:00:00 2001
> From: Naoya Horiguchi <naoya.horiguchi@xxxxxxx>
> Date: Mon, 28 Jun 2021 15:50:37 +0900
> Subject: [PATCH] mm/sparse: set SECTION_NID_SHIFT to 6
>
> Hagio-san reported that the crash utility can see bit 4 in
> section_mem_map (SECTION_TAINT_ZONE_DEVICE) set, even if we do not use
> any ZONE_DEVICE ilke pmem or HMM.  This problem could break crash-related

s/ilke/like/

> toolsets and/or other memory analysis tools.


I'd rephrase this to "Having SECTION_TAINT_ZONE_DEVICE set for wrong sections forces pfn_to_online_page() through the slow path, but doesn't actually break the kernel. However, it can break crash-related toolsets."

However, I am not sure why it actually breaks crash. crash would have to implement the same slow-path check and double-check the sub-section present map; then it should just work like pfn_to_online_page() and not have a real issue. What am I missing?
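
To spell out what I mean by "slow path": with the taint bit set, pfn_to_online_page() takes the get_dev_pagemap() detour added by the Fixes commit. Quoting roughly from memory (so double-check mm/memory_hotplug.c in your tree), the tail of that function looks like:

	if (!online_device_section(ms))
		return pfn_to_page(pfn);

	/*
	 * Slowpath: when ZONE_DEVICE collides with
	 * ZONE_{NORMAL,MOVABLE} within the same section some pfns in
	 * the section may be non-online.
	 */
	pgmap = get_dev_pagemap(pfn, NULL);
	put_dev_pagemap(pgmap);

	/* The presence of a pgmap indicates ZONE_DEVICE offline pfn */
	if (pgmap)
		return NULL;

	return pfn_to_page(pfn);

For a falsely tainted section there is no pgmap to find, so we pay for the lookup but still return the right page.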

> The root cause is that SECTION_NID_SHIFT is incorrectly set to 3 while
> the lower 5 bits are used for SECTION_* flags, so bits 3 and 4 can be
> overlapped by the sub-field for the early NID, and bit 4 is
> unexpectedly set when (for example) the NUMA node id is 2 or 3.
>
> To fix it, set SECTION_NID_SHIFT to 6, the minimum number of bits
> available for the section flag field.
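
To make the overlap concrete, here is a tiny userspace demo of the two encodings (my own illustration, not kernel code):

	#include <stdio.h>

	/* same bit value as in include/linux/mmzone.h */
	#define SECTION_TAINT_ZONE_DEVICE	(1UL << 4)

	int main(void)
	{
		unsigned long nid = 2;	/* NUMA node 2, binary 10 */

		unsigned long old_enc = nid << 3;	/* old SECTION_NID_SHIFT */
		unsigned long new_enc = nid << 6;	/* fixed SECTION_NID_SHIFT */

		/* 2 << 3 == 0x10, which is exactly bit 4 */
		printf("shift 3: taint bit %s\n",
		       (old_enc & SECTION_TAINT_ZONE_DEVICE) ? "set" : "clear");
		/* 2 << 6 == 0x80, well clear of flag bits 0..5 */
		printf("shift 6: taint bit %s\n",
		       (new_enc & SECTION_TAINT_ZONE_DEVICE) ? "set" : "clear");
		return 0;
	}

This prints "set" for shift 3 and "clear" for shift 6; node id 3 similarly leaks into bits 3 and 4 with the old shift.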

> [1]: https://github.com/crash-utility/crash/commit/0b5435e10161345cf713ed447a155a611a1b408b

[1] is never referenced


> Fixes: 1f90a3477df3 ("mm: teach pfn_to_online_page() about ZONE_DEVICE section collisions")
> Cc: stable@xxxxxxxxxxxxxxx # v5.12+

^ I am not really convinced that this is a stable fix. It forces something through the slow path, but the kernel itself is not broken, no?

> Reported-by: Kazuhito Hagio <k-hagio-ab@xxxxxxx>
> Suggested-by: Dan Williams <dan.j.williams@xxxxxxxxx>
> Signed-off-by: Naoya Horiguchi <naoya.horiguchi@xxxxxxx>
> ---
>  include/linux/mmzone.h | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index fcb535560028..d6aa2a196aeb 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -1357,6 +1357,7 @@ extern size_t mem_section_usage_size(void);
>   *      worst combination is powerpc with 256k pages,
>   *      which results in PFN_SECTION_SHIFT equal 6.
>   * To sum it up, at least 6 bits are available.
> + * SECTION_NID_SHIFT is set to 6 based on this fact.

I'd drop that comment, or rephrase it to something like "once this changes, don't forget to adjust SECTION_NID_SHIFT".

>   */
>  #define SECTION_MARKED_PRESENT		(1UL<<0)
>  #define SECTION_HAS_MEM_MAP		(1UL<<1)
> @@ -1365,7 +1366,7 @@ extern size_t mem_section_usage_size(void);
>  #define SECTION_TAINT_ZONE_DEVICE	(1UL<<4)
>  #define SECTION_MAP_LAST_BIT		(1UL<<5)
>  #define SECTION_MAP_MASK		(~(SECTION_MAP_LAST_BIT-1))
> -#define SECTION_NID_SHIFT		3
> +#define SECTION_NID_SHIFT		6
>
>  static inline struct page *__section_mem_map_addr(struct mem_section *section)
>  {
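
For reference, the shift is consumed by the early-NID helpers in mm/sparse.c, which (again quoting roughly from memory) look like:

	static inline unsigned long sparse_encode_early_nid(int nid)
	{
		return ((unsigned long)nid << SECTION_NID_SHIFT);
	}

	static inline int sparse_early_nid(struct mem_section *section)
	{
		return (section->section_mem_map >> SECTION_NID_SHIFT);
	}

With the old shift of 3, the encoded NID landed inside the low flag area that sparse_init_one_section() preserves (it only clears the bits in SECTION_MAP_MASK), which, if I read the code correctly, is how the bogus taint bit survived into the final section_mem_map.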


The change itself looks correct to me.

Acked-by: David Hildenbrand <david@xxxxxxxxxx>

--
Thanks,

David / dhildenb





