Re: [PATCH v2 1/2] arm64: mm: vmemmap populate to page level if not section aligned

On Tue, Dec 24, 2024 at 05:32:06PM +0800, Zhenhua Huang wrote:
> Thanks Catalin for review!
> Merry Christmas.

Merry Christmas to you too!

> On 2024/12/21 2:30, Catalin Marinas wrote:
> > On Mon, Dec 09, 2024 at 05:42:26PM +0800, Zhenhua Huang wrote:
> > > Fixes: c1cc1552616d ("arm64: MMU initialisation")
> > 
> > I wouldn't add a fix for the first commit adding arm64 support, we did
> > not even have memory hotplug at the time (added later in 5.7 by commit
> > bbd6ec605c0f ("arm64/mm: Enable memory hot remove")). IIUC, this hasn't
> > been a problem until commit ba72b4c8cf60 ("mm/sparsemem: support
> > sub-section hotplug"). That commit broke some arm64 assumptions.
> 
> Shall we add ba72b4c8cf60 ("mm/sparsemem: support sub-section hotplug")
> instead, since it broke the arm64 assumptions?

Yes, I think that would be better. And a cc stable to 5.4 (the above
commit appeared in 5.3).

> > > diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> > > index e2739b69e11b..fd59ee44960e 100644
> > > --- a/arch/arm64/mm/mmu.c
> > > +++ b/arch/arm64/mm/mmu.c
> > > @@ -1177,7 +1177,9 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
> > >   {
> > >   	WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));
> > > -	if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES))
> > > +	if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES) ||
> > > +	!IS_ALIGNED(page_to_pfn((struct page *)start), PAGES_PER_SECTION) ||
> > > +	!IS_ALIGNED(page_to_pfn((struct page *)end), PAGES_PER_SECTION))
> > >   		return vmemmap_populate_basepages(start, end, node, altmap);
> > >   	else
> > >   		return vmemmap_populate_hugepages(start, end, node, altmap);
> > 
> > An alternative would be to fix unmap_hotplug_pmd_range() etc. to avoid
> > nuking the whole vmemmap pmd section if it's not empty. Not sure how
> > easy that is, whether we have the necessary information (I haven't
> > looked in detail).
> > 
> > A potential issue - can we hotplug 128MB of RAM and only unplug 2MB? If
> > that's possible, the problem isn't solved by this patch.
> 
> Indeed, it seems there is no guarantee that the plugged size must equal
> the unplugged size...
> 
> I have two ideas:
> 1. Completely disable this PMD mapping optimization, since there is no
> guarantee that hotplugged memory is aligned to a full 128M section.

I'd be in favour of this, at least if CONFIG_MEMORY_HOTPLUG is enabled.
I think the only advantage here is that we don't allocate a full 2MB
block for vmemmap when only plugging in a sub-section.
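
Something along these lines maybe (completely untested, just to show the
shape of option #1 on top of the current vmemmap_populate()):

int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
		struct vmem_altmap *altmap)
{
	WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));

	/*
	 * With memory hotplug enabled, stick to base pages so that a
	 * later partial unplug never has to tear down a PMD-mapped
	 * vmemmap block shared with another subsection.
	 */
	if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES) ||
	    IS_ENABLED(CONFIG_MEMORY_HOTPLUG))
		return vmemmap_populate_basepages(start, end, node, altmap);
	else
		return vmemmap_populate_hugepages(start, end, node, altmap);
}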

> 2. If we want to keep this optimization:
> I propose adding an argument to vmemmap_free() to indicate whether the
> entire section is being freed (based on the subsection map). vmemmap_free()
> is a common function, so this might affect other architectures... The
> process would be:
> vmemmap_free
> 	unmap_hotplug_range // In unmap_hotplug_pmd_range(), as you
> mentioned: if the whole section is freed, proceed as usual. Otherwise,
> *just clear out the struct page contents but do not free the mapping*.
> 	free_empty_tables // called only if the entire section is freed
> 
> On the populate side:
> else if (vmemmap_check_pmd(pmd, node, addr, next)) // implement this hook
> 	continue;	// the backing block still exists, just skip it
> 
> Could you please comment further on whether #2 is feasible?

vmemmap_free() already gets start/end, so it could at least check the
section alignment and avoid freeing anything when it's not unplugging a
full section. That does leave a 2MB vmemmap block in place when the last
subsection is removed, but it's safer than freeing struct page entries
that are still valid. In addition, it could query the memory hotplug
state with something like find_memory_block() and work out whether the
section is now empty.
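
Roughly something like this (untested sketch of just the alignment check,
leaving the find_memory_block() idea aside):

void vmemmap_free(unsigned long start, unsigned long end,
		struct vmem_altmap *altmap)
{
	WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));

	/*
	 * Bail out if the range does not cover whole sections. This
	 * leaks the last partially used 2MB vmemmap block, but never
	 * frees struct pages another subsection may still rely on.
	 */
	if (!IS_ALIGNED(page_to_pfn((struct page *)start), PAGES_PER_SECTION) ||
	    !IS_ALIGNED(page_to_pfn((struct page *)end), PAGES_PER_SECTION))
		return;

	unmap_hotplug_range(start, end, true, altmap);
	free_empty_tables(start, end, VMEMMAP_START, VMEMMAP_END);
}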

Anyway, I'll be off until the new year; maybe I'll have other ideas by then.

-- 
Catalin



