On Mon, Mar 15, 2021 at 05:20:14PM +0800, Muchun Song wrote:
> --- a/arch/x86/mm/init_64.c
> +++ b/arch/x86/mm/init_64.c
> @@ -34,6 +34,7 @@
>  #include <linux/gfp.h>
>  #include <linux/kcore.h>
>  #include <linux/bootmem_info.h>
> +#include <linux/hugetlb.h>
>  
>  #include <asm/processor.h>
>  #include <asm/bios_ebda.h>
> @@ -1557,7 +1558,8 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
>  {
>  	int err;
>  
> -	if (end - start < PAGES_PER_SECTION * sizeof(struct page))
> +	if ((is_hugetlb_free_vmemmap_enabled() && !altmap) ||
> +	    end - start < PAGES_PER_SECTION * sizeof(struct page))
>  		err = vmemmap_populate_basepages(start, end, node, NULL);
>  	else if (boot_cpu_has(X86_FEATURE_PSE))
>  		err = vmemmap_populate_hugepages(start, end, node, altmap);

I've been thinking about this some more.

Assume you opt in to the hugetlb-vmemmap feature, and assume you pass a
valid altmap to vmemmap_populate(). That will lead to us populating the
vmemmap array with hugepages.

What happens if a HugeTLB page is then allocated and falls within that
memory range (backed by hugepages)? AFAIK, that would get us in trouble,
as the current code can only operate on vmemmap backed by PAGE_SIZE
pages, right? I cannot remember the details, but I do not think anything
prevents that from happening. Am I missing something?

-- 
Oscar Salvador
SUSE L3