On Thu, Jun 16, 2022 at 09:21:35AM +0200, David Hildenbrand wrote:
> On 16.06.22 04:45, Muchun Song wrote:
> > On Wed, Jun 15, 2022 at 11:51:49AM +0200, David Hildenbrand wrote:
> >> On 20.05.22 04:55, Muchun Song wrote:
> >>> For now, the feature of hugetlb_free_vmemmap is not compatible with
> >>> the feature of memory_hotplug.memmap_on_memory, and
> >>> hugetlb_free_vmemmap takes precedence over
> >>> memory_hotplug.memmap_on_memory. However, someone wants to make
> >>> memory_hotplug.memmap_on_memory take precedence over
> >>> hugetlb_free_vmemmap, since memmap_on_memory makes memory hotplug
> >>> more likely to succeed in close-to-OOM situations. So the decision
> >>> to make hugetlb_free_vmemmap take precedence is neither wise nor
> >>> elegant. The proper approach is to have hugetlb_vmemmap.c check
> >>> whether the section to which the HugeTLB pages belong can be
> >>> optimized. If the section's vmemmap pages are allocated from the
> >>> added memory block itself, hugetlb_free_vmemmap should refuse to
> >>> optimize the vmemmap; otherwise, do the optimization. Then both
> >>> kernel parameters are compatible. So this patch introduces
> >>> SECTION_CANNOT_OPTIMIZE_VMEMMAP to indicate whether the section can
> >>> be optimized.
> >>>
> >>
> >> In theory, we have that information stored in the relevant memory block,
> >> but I assume that lookup in the xarray + locking is impractical.
> >>
> >> I wonder if we can derive that information simply from the vmemmap pages
> >> themselves, because *drumroll*
> >>
> >> For one vmemmap page (the first one), the vmemmap corresponds to itself
> >> -- what?!
> >>
> >>
> >> [ hotplugged memory ]
> >> [ memmap ][ usable memory ]
> >>    |   |              |
> >> ^---   |              |
> >> ^-------              |
> >> ^----------------------
> >>
> >> The memmap of the first page of hotplugged memory falls onto itself.
> >> We'd have to derive that condition from the actual "usable memory".
> >>
> >>
> >> We currently support memmap_on_memory memory only within fixed-size
> >> memory blocks. So "hotplugged memory" is guaranteed to be aligned to
> >> memory_block_size_bytes() and the size is memory_block_size_bytes().
> >>
> >> If we had a page falling into usable memory, we'd simply look up the
> >> first page and test whether its vmemmap maps to itself.
> >>
> >
> > I think this can work. Should we use this approach in the next version?
> >
>
> Either that or, more preferably, flagging the vmemmap pages eventually.
> That might be future-proof.
>

All right. I think we can go with the above approach; we can move to a
flag-based approach in the future if needed.

> >>
> >> Of course, once we support variable-sized memory blocks, it would be
> >> different.
> >>
> >>
> >> An easier/future-proof approach might simply be flagging the vmemmap
> >> pages as being special. We reuse page flags for that, which don't have
> >> semantics yet (i.e., PG_reserved indicates a boot-time allocation via
> >> memblock).
> >>
> >
> > I think you mean flagging the vmemmap pages' struct pages as PG_reserved
> > if they can be optimized, right? When the vmemmap pages are allocated in
> > hugetlb_vmemmap_alloc(), is it valid to flag them as PG_reserved (they
> > are allocated from the buddy allocator, not memblock)?
> >
>
> Sorry I wasn't clear. I'd flag them with some other
> not-yet-used-for-vmemmap-pages flag. Reusing PG_reserved could result in
> trouble.
>

Sorry, I thought you were suggesting reusing "PG_reserved". My bad, I
misread. Thanks.
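
For the "test if the vmemmap maps to itself" approach, something like the
sketch below (untested, just to illustrate the idea) is what I have in
mind. Both memory_block_vmemmap_self_hosted() and vmemmap_backing_pfn()
are made-up names here; the latter would have to resolve the physical pfn
backing a vmemmap virtual address, e.g. by walking the kernel page tables:

static bool memory_block_vmemmap_self_hosted(unsigned long start_pfn)
{
        /*
         * vmemmap_backing_pfn() is a hypothetical helper that returns
         * the pfn of the physical page backing a vmemmap virtual
         * address; no helper with this name exists upstream.
         */
        unsigned long vmemmap_pfn =
                vmemmap_backing_pfn(pfn_to_page(start_pfn));

        /*
         * With memmap_on_memory, the memmap of a hotplugged memory
         * block is allocated from the block itself, so the struct page
         * describing the block's first pfn is physically stored in
         * that very first page: the vmemmap maps onto itself.
         */
        return vmemmap_pfn == start_pfn;
}

hugetlb_vmemmap.c would then refuse the optimization whenever this returns
true for the memory block that the HugeTLB page belongs to.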