On Fri, Nov 06, 2020 at 12:08:22AM +0800, Muchun Song wrote:
> > I do not think you need this.
> > We already have hugepages_supported().
>
> Maybe some architectures support hugepage, but the vmemmap do not
> use the hugepage map. In this case, we need it. But I am not sure if it
> exists in the real world. At least, x86 can reuse hugepages_supported.

Yes, but that is the point.
IIUC, this patchset will enable HugeTLB vmemmap pages only for x86_64.
Then let us make the patchset specific to that architecture.
If at some point this grows more users (powerpc, arm, ...), we can add
the missing code then, but for now it makes sense to only include the
bits needed to make this work on x86_64.

Accordingly, the changelog is a bit misleading:

"On some architectures, the vmemmap areas use huge page mapping.
If we want to free the unused vmemmap pages, we have to split the huge pmd
firstly. So we should pre-allocate pgtable to split huge pmd."

On x86_64, vmemmap is always PMD mapped if the machine has hugepages
support and we have 2MB of contiguous, PMD-aligned pages.
E.g.: I have seen cases where, after the system had run for a period of
time, hotplug operations were mapping the vmemmap representing the
hot-added range with base pages, because we could not find enough
contiguous and aligned memory.
Something that [1] tries to solve:

[1] https://patchwork.kernel.org/project/linux-mm/cover/20201022125835.26396-1-osalvador@xxxxxxx/

But anyway, my point is: let us make it clear in the changelog that
this is aimed at x86_64 at the moment.
Saying "on some architectures" might make people think that this is not
x86_64 specific.

> > > +	vmemmap_pgtable_init(page);
> >
> > Maybe just open code this one?
>
> Sorry. I don't quite understand what it means. Could you explain?

I meant doing

	page_huge_pte(page) = NULL;

But no strong feelings.

-- 
Oscar Salvador
SUSE L3
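[For context, the pre-allocation the quoted changelog refers to follows the pattern sketched below. This is a hedged, non-compilable illustration of kernel internals and not code from the patchset: the function name vmemmap_split_pmd and its signature are invented for illustration, though the helpers it calls (pmd_pfn, set_pte_at, pfn_pte, pmd_populate_kernel, flush_tlb_kernel_range) are standard kernel/x86 page-table APIs. The key point is that the pgtable page must be allocated ahead of time, because the split itself may run in a context where memory allocation is not allowed.]

```c
/*
 * Sketch only (not from the patchset): split one PMD-mapped vmemmap
 * range into base-page PTEs. "pgtable" is a page-table page that was
 * pre-allocated earlier, while allocation was still permitted -- this
 * is the pre-allocation the changelog talks about.
 */
static void vmemmap_split_pmd(pmd_t *pmd, pte_t *pgtable, unsigned long start)
{
	unsigned long pfn = pmd_pfn(*pmd);
	unsigned long addr = start & PMD_MASK;
	unsigned int i;

	/* Fill the new page table with base-page entries covering the PMD. */
	for (i = 0; i < PTRS_PER_PTE; i++, pfn++)
		set_pte_at(&init_mm, addr + i * PAGE_SIZE, pgtable + i,
			   pfn_pte(pfn, PAGE_KERNEL));

	/* Replace the huge mapping with the new page table and flush. */
	pmd_populate_kernel(&init_mm, pmd, pgtable);
	flush_tlb_kernel_range(addr, addr + PMD_SIZE);
}
```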