Re: [PATCH v4] arm64: mm: Populate vmemmap/linear at the page level for hotplugged sections

Hi Catalin,

On 2025/1/8 3:22, Catalin Marinas wrote:
On Tue, Jan 07, 2025 at 03:42:52PM +0800, Zhenhua Huang wrote:
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index e2739b69e11b..5e0f514de870 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -42,9 +42,11 @@
  #include <asm/pgalloc.h>
  #include <asm/kfence.h>
-#define NO_BLOCK_MAPPINGS BIT(0)
-#define NO_CONT_MAPPINGS	BIT(1)
-#define NO_EXEC_MAPPINGS	BIT(2)	/* assumes FEAT_HPDS is not used */
+#define NO_PMD_BLOCK_MAPPINGS	BIT(0)
+#define NO_PUD_BLOCK_MAPPINGS	BIT(1)  /* Hotplug case: do not want block mapping for PUD */
+#define NO_BLOCK_MAPPINGS (NO_PMD_BLOCK_MAPPINGS | NO_PUD_BLOCK_MAPPINGS)

Nit: please use a tab instead of space before (NO_PMD_...)

+#define NO_CONT_MAPPINGS	BIT(2)
+#define NO_EXEC_MAPPINGS	BIT(3)	/* assumes FEAT_HPDS is not used */
u64 kimage_voffset __ro_after_init;
  EXPORT_SYMBOL(kimage_voffset);
@@ -254,7 +256,7 @@ static void init_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end,
/* try section mapping first */
  		if (((addr | next | phys) & ~PMD_MASK) == 0 &&
-		    (flags & NO_BLOCK_MAPPINGS) == 0) {
+		    (flags & NO_PMD_BLOCK_MAPPINGS) == 0) {
  			pmd_set_huge(pmdp, phys, prot);
/*
@@ -356,10 +358,11 @@ static void alloc_init_pud(p4d_t *p4dp, unsigned long addr, unsigned long end,
/*
  		 * For 4K granule only, attempt to put down a 1GB block
+		 * Hotplug case: do not attempt 1GB block
  		 */

I don't think we need this comment added here. The hotplug case is a
decision of the caller, so better to have the comment there.

Yeah, will remove.


  		if (pud_sect_supported() &&
  		   ((addr | next | phys) & ~PUD_MASK) == 0 &&
-		    (flags & NO_BLOCK_MAPPINGS) == 0) {
+		   (flags & NO_PUD_BLOCK_MAPPINGS) == 0) {
  			pud_set_huge(pudp, phys, prot);

Nit: something wrong with the alignment here. I think the unmodified
line after the 'if' one above was misaligned before your patch.

Noted and will correct in the next patch.


/*
@@ -1175,9 +1178,21 @@ int __meminit vmemmap_check_pmd(pmd_t *pmdp, int node,
  int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
  		struct vmem_altmap *altmap)
  {
+	unsigned long start_pfn;
+	struct mem_section *ms;
+
  	WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));
-	if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES))
+	start_pfn = page_to_pfn((struct page *)start);
+	ms = __pfn_to_section(start_pfn);

Hmm, it would have been better if the core code provided the start pfn
as it does for vmemmap_populate_compound_pages() but I'm fine with
deriving it from 'start'.

I found another bug: even for an early section, SECTION_IS_EARLY is not yet set when vmemmap_populate() is called, so early_section() always returns false.

Since vmemmap_populate() runs during section initialization, it may be hard to call it a bug. However, should we check SECTION_MARKED_PRESENT instead (see the sketch below)? It tested well in my setup.

Hotplug flow:
1. section_activate -> vmemmap_populate
2. mark PRESENT

In contrast, the early flow:
1. memblocks_present -> mark PRESENT
2. __populate_section_memmap -> vmemmap_populate
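
A minimal sketch of what I have in mind, assuming present_section() (which tests SECTION_MARKED_PRESENT) is acceptable here:

	/*
	 * SECTION_IS_EARLY is not yet set while an early section is
	 * being populated, but SECTION_MARKED_PRESENT already is. The
	 * hotplug path populates before marking the section present,
	 * so the present bit distinguishes the two flows.
	 */
	if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES) || !present_section(ms))
		return vmemmap_populate_basepages(start, end, node, altmap);
	else
		return vmemmap_populate_hugepages(start, end, node, altmap);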


+	/*
+	 * Hotplugged section does not support hugepages as
+	 * PMD_SIZE (hence PUD_SIZE) section mapping covers
+	 * struct page range that exceeds a SUBSECTION_SIZE
+	 * i.e 2MB - for all available base page sizes.
+	 */
+	if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES) || !early_section(ms))
  		return vmemmap_populate_basepages(start, end, node, altmap);
  	else
  		return vmemmap_populate_hugepages(start, end, node, altmap);
@@ -1339,9 +1354,25 @@ int arch_add_memory(int nid, u64 start, u64 size,
  		    struct mhp_params *params)
  {
  	int ret, flags = NO_EXEC_MAPPINGS;
+	unsigned long start_pfn = page_to_pfn((struct page *)start);
+	struct mem_section *ms = __pfn_to_section(start_pfn);

This looks wrong. 'start' here is a physical address, you want
PFN_DOWN() instead.

Sorry, my mistake. Thanks for catching it.
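
Will switch to PFN_DOWN() in the next version, i.e. something like:

	unsigned long start_pfn = PFN_DOWN(start);
	struct mem_section *ms = __pfn_to_section(start_pfn);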


  	VM_BUG_ON(!mhp_range_allowed(start, size, true));

+	/* should not be invoked by early section */
+	WARN_ON(early_section(ms));
+
+	/*
+	 * 4K base page's PMD_SIZE matches SUBSECTION_SIZE i.e 2MB. Hence
+	 * PMD section mapping can be allowed, but only for 4K base pages.
+	 * Whereas PMD_SIZE (hence PUD_SIZE) for other page sizes exceeds
+	 * SUBSECTION_SIZE.
+	 */
+	if (IS_ENABLED(CONFIG_ARM64_4K_PAGES))
+		flags |= NO_PUD_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;

In theory we can allow contiguous PTE mappings but not PMD. You could
probably do the same as a NO_BLOCK_MAPPINGS and split it into multiple
components - NO_PTE_CONT_MAPPINGS and so on.

+	else
+		flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;

Similarly with 16K/64K pages we can allow contiguous PTEs as they all go
up to 2MB blocks.

Yes!


I think we should write the flags setup in a more readable way than
trying to do mental maths on the possible combinations, something like:

	flags = NO_PUD_BLOCK_MAPPINGS | NO_PMD_CONT_MAPPINGS;
	if (SUBSECTION_SHIFT < PMD_SHIFT)
		flags |= NO_PMD_BLOCK_MAPPINGS;
	if (SUBSECTION_SHIFT < CONT_PTE_SHIFT)
		flags |= NO_PTE_CONT_MAPPINGS;

Good idea indeed. We no longer need to worry about the page size configuration.
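
To make the suggested flags compile, the split could mirror the block-mapping flags; a sketch (the bit positions are illustrative):

	#define NO_PMD_BLOCK_MAPPINGS	BIT(0)
	#define NO_PUD_BLOCK_MAPPINGS	BIT(1)
	#define NO_BLOCK_MAPPINGS	(NO_PMD_BLOCK_MAPPINGS | NO_PUD_BLOCK_MAPPINGS)
	#define NO_PTE_CONT_MAPPINGS	BIT(2)
	#define NO_PMD_CONT_MAPPINGS	BIT(3)
	#define NO_CONT_MAPPINGS	(NO_PTE_CONT_MAPPINGS | NO_PMD_CONT_MAPPINGS)
	#define NO_EXEC_MAPPINGS	BIT(4)	/* assumes FEAT_HPDS is not used */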


This way we don't care about the page size and should cover any changes
to SUBSECTION_SHIFT making it smaller than 2MB.
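
For reference, plugging in the current arm64 constants (SUBSECTION_SHIFT = 21, i.e. 2MB), if I read them correctly:

	granule   PMD_SHIFT   CONT_PTE_SHIFT   extra flags
	4K        21          16               none
	16K       25          21               NO_PMD_BLOCK_MAPPINGS
	64K       29          21               NO_PMD_BLOCK_MAPPINGS

So PMD block mappings are only allowed for 4K pages, while contiguous PTEs remain allowed for all granules, matching the 16K/64K point above.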





