On Fri, Jun 17, 2022 at 07:46:53AM +0200, Oscar Salvador wrote:
> On Thu, Jun 16, 2022 at 09:30:33AM +0200, David Hildenbrand wrote:
> > IIRC, that was used to skip these pages on the offlining path before
> > we provided the ranges to offline_pages().
>
> Yeah, it was designed for that purpose back then.
>
> > I'd not mess with PG_reserved, and give them a clearer name, to not
> > confuse them with other, ordinary, vmemmap pages that are not
> > self-hosted (maybe in the future we might want to flag all vmemmap
> > pages with a new type?).
>
> Not sure whether a new type is really needed, or to put it another
> way, I cannot see the benefit.
>
> > I'd just try reusing the flag PG_owner_priv_1. And eventually, flag
> > all (v)memmap pages with a type PG_memmap. However, the latter would
> > be optional and might not be strictly required.
> >
> > So what I think could make sense is
> >
> > /* vmemmap pages that are self-hosted and cannot be optimized/freed. */
> > PG_vmemmap_self_hosted = PG_owner_priv_1,
>
> Sure, I just lightly tested the below, and it seems to work, but I am
> not sure whether that is what you are referring to.
> @Muchun: thoughts?
>
> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> index e66f7aa3191d..a4556afd7bda 100644
> --- a/include/linux/page-flags.h
> +++ b/include/linux/page-flags.h
> @@ -193,6 +193,11 @@ enum pageflags {
>  
>  	/* Only valid for buddy pages. Used to track pages that are reported */
>  	PG_reported = PG_uptodate,
> +
> +#ifdef CONFIG_MEMORY_HOTPLUG
> +	/* For self-hosted memmap pages */
> +	PG_vmemmap_self_hosted = PG_owner_priv_1,
> +#endif
>  };
>  
>  #define PAGEFLAGS_MASK		((1UL << NR_PAGEFLAGS) - 1)
> @@ -628,6 +633,10 @@ PAGEFLAG_FALSE(SkipKASanPoison, skip_kasan_poison)
>   */
>  __PAGEFLAG(Reported, reported, PF_NO_COMPOUND)
>  
> +#ifdef CONFIG_MEMORY_HOTPLUG
> +PAGEFLAG(Vmemmap_self_hosted, vmemmap_self_hosted, PF_ANY)
> +#endif
> +
>  /*
>   * On an anonymous page mapped into a user virtual memory area,
>   * page->mapping points to its anon_vma, not to a struct address_space;
> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> index 1089ea8a9c98..e2de7ed27e9e 100644
> --- a/mm/hugetlb_vmemmap.c
> +++ b/mm/hugetlb_vmemmap.c
> @@ -101,6 +101,14 @@ void hugetlb_vmemmap_free(struct hstate *h, struct page *head)
>  {
>  	unsigned long vmemmap_addr = (unsigned long)head;
>  	unsigned long vmemmap_end, vmemmap_reuse, vmemmap_pages;
> +	struct mem_section *ms = __pfn_to_section(page_to_pfn(head));

Hi Oscar,

After more thinking, I think this should be:

	struct mem_section *ms =
		__pfn_to_section(ALIGN_DOWN(page_to_pfn(head),
					    PHYS_PFN(memory_block_size_bytes())));

(memory_block_size_bytes() is a byte count, so it has to be converted
to a number of pages before it can serve as an alignment for a pfn)

Why?

 [    hotplugged memory     ]
 [ section ][...][ section ]
 [ vmemmap ][ usable memory ]
   ^   |     |              |
   +---+     |              |
    ^        |              |
    +--------+              |
         ^                  |
         +------------------+

page_to_pfn(head) can fall into a non-first section, but what we want
is the first section, whose ->section_mem_map points to the start of
the vmemmap.

If we align page_to_pfn(head) down to the start pfn of the hotplugged
memory block, we can simplify the code further:

	unsigned long nr_pages = PHYS_PFN(memory_block_size_bytes());
	unsigned long pfn = ALIGN_DOWN(page_to_pfn(head), nr_pages);

	if (pfn_valid(pfn) && PageVmemmap_self_hosted(pfn_to_page(pfn)))
		return;

A hotplugged memory block never has non-present sections, while a boot
memory block can have one or more, so pfn_valid() filters out the case
where the first section is non-present.
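To double-check the units, a tiny userspace sketch (the pfn and the
128 MiB block size are made up for illustration, and ALIGN_DOWN() is
re-implemented here for power-of-2 alignments only):

	#include <stdio.h>

	/* Power-of-2-only stand-in for the kernel's ALIGN_DOWN(). */
	#define ALIGN_DOWN(x, a)	((x) & ~((unsigned long)(a) - 1))

	#define PAGE_SIZE	4096UL
	#define BLOCK_BYTES	(128UL << 20)	/* assumed block size */

	int main(void)
	{
		unsigned long block_pfns = BLOCK_BYTES / PAGE_SIZE; /* 0x8000 */
		unsigned long head_pfn = 0x242000; /* a pfn inside some block */

		/* Aligning a pfn to a pfn count lands on the block start... */
		printf("pages: 0x%lx\n", ALIGN_DOWN(head_pfn, block_pfns));
		/* ...while aligning a pfn to a byte count overshoots to 0. */
		printf("bytes: 0x%lx\n", ALIGN_DOWN(head_pfn, BLOCK_BYTES));
		return 0;
	}

This prints "pages: 0x240000" (the block's first pfn) and "bytes: 0x0".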
Hopefully I am not wrong.

Thanks.

> +	struct page *memmap;
> +
> +	memmap = sparse_decode_mem_map(ms->section_mem_map,
> +				       pfn_to_section_nr(page_to_pfn(head)));
> +
> +	if (PageVmemmap_self_hosted(memmap))
> +		return;
>  
>  	vmemmap_pages = hugetlb_optimize_vmemmap_pages(h);
>  	if (!vmemmap_pages)
> @@ -199,10 +207,10 @@ static struct ctl_table hugetlb_vmemmap_sysctls[] = {
>  static __init int hugetlb_vmemmap_sysctls_init(void)
>  {
>  	/*
> -	 * If "memory_hotplug.memmap_on_memory" is enabled or "struct page"
> -	 * crosses page boundaries, the vmemmap pages cannot be optimized.
> +	 * If "struct page" crosses page boundaries, the vmemmap pages cannot
> +	 * be optimized.
>  	 */
> -	if (!mhp_memmap_on_memory() && is_power_of_2(sizeof(struct page)))
> +	if (is_power_of_2(sizeof(struct page)))
>  		register_sysctl_init("vm", hugetlb_vmemmap_sysctls);
>  
>  	return 0;
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 1213d0c67a53..863966c2c6f1 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -45,8 +45,6 @@
>  #ifdef CONFIG_MHP_MEMMAP_ON_MEMORY
>  static int memmap_on_memory_set(const char *val, const struct kernel_param *kp)
>  {
> -	if (hugetlb_optimize_vmemmap_enabled())
> -		return 0;
>  	return param_set_bool(val, kp);
>  }
>  
> @@ -1032,6 +1030,7 @@ int mhp_init_memmap_on_memory(unsigned long pfn, unsigned long nr_pages,
>  {
>  	unsigned long end_pfn = pfn + nr_pages;
>  	int ret;
> +	int i;
>  
>  	ret = kasan_add_zero_shadow(__va(PFN_PHYS(pfn)), PFN_PHYS(nr_pages));
>  	if (ret)
> @@ -1039,6 +1038,12 @@ int mhp_init_memmap_on_memory(unsigned long pfn, unsigned long nr_pages,
>  
>  	move_pfn_range_to_zone(zone, pfn, nr_pages, NULL, MIGRATE_UNMOVABLE);
>  
> +	/*
> +	 * Let us flag self-hosted memmap
> +	 */
> +	for (i = 0; i < nr_pages; i++)
> +		SetPageVmemmap_self_hosted(pfn_to_page(pfn + i));
> +
>  	/*
>  	 * It might be that the vmemmap_pages fully span sections. If that is
>  	 * the case, mark those sections online here as otherwise they will be
>
>
> --
> Oscar Salvador
> SUSE Labs
>
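P.S. To put numbers on the "[ vmemmap ][ usable memory ]" picture
above: a back-of-the-envelope userspace sketch (assuming 4 KiB pages, a
64-byte struct page as on x86_64, and a 128 MiB memory block; the real
values come from the running kernel) of how many pages the
SetPageVmemmap_self_hosted() loop ends up flagging:

	#include <stdio.h>

	#define PAGE_SIZE		4096UL		/* assumed */
	#define STRUCT_PAGE_SIZE	64UL		/* assumed sizeof(struct page) */
	#define BLOCK_BYTES		(128UL << 20)	/* assumed block size */

	int main(void)
	{
		unsigned long block_pages = BLOCK_BYTES / PAGE_SIZE;	      /* 32768 */
		unsigned long memmap_bytes = block_pages * STRUCT_PAGE_SIZE; /* 2 MiB */
		unsigned long memmap_pages = memmap_bytes / PAGE_SIZE;	      /* 512 */

		/* With memmap_on_memory, these sit at the start of the block. */
		printf("%lu of %lu pages (%.2f%%) are self-hosted vmemmap\n",
		       memmap_pages, block_pages,
		       100.0 * memmap_pages / block_pages);
		return 0;
	}

So only the first 512 pages of such a block would carry the new flag;
everything behind them is usable memory.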