On Tue, Mar 29, 2022 at 7:44 PM Barry Song <21cnbao@xxxxxxxxx> wrote:
>
> On Tue, Mar 29, 2022 at 5:57 PM Muchun Song <songmuchun@xxxxxxxxxxxxx> wrote:
> >
> > The feature of minimizing overhead of struct page associated with each
> > HugeTLB page aims to free its vmemmap pages (used as struct page) to
> > save memory, which amounts to ~14GB/16GB per 1TB of HugeTLB pages
> > (2MB/1GB type). In short, when a HugeTLB page is allocated or freed,
> > the vmemmap array representing the range associated with the page needs
> > to be remapped. When a page is allocated, vmemmap pages are freed after
> > remapping. When a page is freed, previously discarded vmemmap pages
> > must be allocated before remapping. More implementation details can be
> > found here [1].
> >
> > The preparation for freeing vmemmap pages associated with each HugeTLB
> > page is ready, so we can support this feature for arm64 now.
> > flush_dcache_page() needs to be adapted to operate on the head page's
> > flags, since the tail vmemmap pages are mapped read-only once the
> > feature is enabled (a clear operation is not permitted).
> >
> > There were some discussions about this in the thread [2], but no
> > conclusion was reached in the end. I have copied the concerns raised
> > by Anshuman here.
> >
> > 1st concern:
> > '''
> > But what happens when a hot remove section's vmemmap area (which is
> > being teared down) is nearby another vmemmap area which is either
> > created or being destroyed for HugeTLB alloc/free purpose. As you
> > mentioned HugeTLB pages inside the hot remove section might be safe.
> > But what about other HugeTLB areas whose vmemmap area shares page
> > table entries with vmemmap entries for a section being hot removed ?
> > Massive HugeTLB alloc/use/free test cycle using memory just adjacent
> > to a memory hotplug area, which is always added and removed
> > periodically, should be able to expose this problem.
> > '''
> >
> > Answer: At the time memory is removed, all HugeTLB pages have either
> > been migrated away or dissolved, so there is no race between memory
> > hot remove and free_huge_page_vmemmap(). Therefore, HugeTLB pages
> > inside the hot remove section are safe. As for the question "what
> > about other HugeTLB areas whose vmemmap area shares page table
> > entries with vmemmap entries for a section being hot removed ?", its
> > premise does not hold. The minimal granularity of hotplug memory is
> > 128MB (on arm64 with a 4K base page), so any HugeTLB page smaller
> > than 128MB lies within a single section; no PTE page tables are
> > shared between HugeTLB pages in this section and those in other
> > sections, and a HugeTLB page cannot cross two sections. In this case,
> > the section cannot be freed. For any HugeTLB page bigger than 128MB
> > (the section size), the size of its vmemmap pages is an integer
> > multiple of 2MB (PMD-mapped). As long as:
> >
> > 1) HugeTLBs are naturally aligned, power-of-two sizes
> > 2) The HugeTLB size >= the section size
> > 3) The HugeTLB size >= the vmemmap leaf mapping size
> >
> > then a HugeTLB page will not share any leaf page table entries with
> > *anything else*, though it will share intermediate entries. In this
> > case too, at the time memory is removed, all HugeTLB pages have
> > either been migrated away or dissolved, so there is likewise no race
> > between memory hot remove and free_huge_page_vmemmap().
> >
> > 2nd concern:
> > '''
> > differently, not sure if ptdump would require any synchronization.
> >
> > Dumping an wrong value is probably okay but crashing because a page
> > table entry is being freed after ptdump acquired the pointer is bad.
> > On arm64, ptdump() is protected against hotremove via
> > [get|put]_online_mems().
> > '''
> >
> > Answer: ptdump should be fine, since vmemmap_remap_free() only
> > exchanges PTEs or splits a PMD entry (which means allocating a PTE
> > page table). Neither operation frees any page tables (PTE), so ptdump
> > cannot run into a UAF on any page tables. The worst case is just
> > dumping a wrong value.
> >
> > [1] https://lore.kernel.org/all/20210510030027.56044-1-songmuchun@xxxxxxxxxxxxx/
> > [2] https://lore.kernel.org/all/20210518091826.36937-1-songmuchun@xxxxxxxxxxxxx/
> >
> > Signed-off-by: Muchun Song <songmuchun@xxxxxxxxxxxxx>
> > ---
> > Changes in v2:
> > - Update commit message (Mark Rutland).
> > - Fix flush_dcache_page().
> >
> >  arch/arm64/mm/flush.c | 14 ++++++++++++++
> >  fs/Kconfig            |  2 +-
> >  2 files changed, 15 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
> > index a06c6ac770d4..705484a9b9df 100644
> > --- a/arch/arm64/mm/flush.c
> > +++ b/arch/arm64/mm/flush.c
> > @@ -75,6 +75,20 @@ EXPORT_SYMBOL_GPL(__sync_icache_dcache);
> >   */
> >  void flush_dcache_page(struct page *page)
> >  {
> > +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> > +	/*
> > +	 * Only the head page's flags of a HugeTLB page can be cleared, since
> > +	 * the tail vmemmap pages associated with each HugeTLB page are mapped
> > +	 * read-only when CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is enabled (see
> > +	 * vmemmap_remap_pte() for details). Although __sync_icache_dcache()
> > +	 * only sets the PG_dcache_clean flag on the head page struct, some
> > +	 * tail page structs can still see the flag since the head vmemmap
> > +	 * page frame is reused (see the comments above page_fixed_fake_head()
> > +	 * for details).
>
> Is this still true if hugetlb_free_vmemmap_enabled() is false?

No. Do you think it is better to add hugetlb_free_vmemmap_enabled()
into the if condition? Something like the following?

+	if (hugetlb_free_vmemmap_enabled() && PageHuge(page))
+		page = compound_head(page);

> btw, the subject is a bit confusing, as it seems it is not bringing up
> HUGETLB_PAGE_FREE_VMEMMAP; it seems the feature has already been
> there, but we are lacking some fixes for some functions to make it
> work.

Right.

> could we explain this clearly in the commit log? maybe we need a
> better subject for the commit as well.

Will do. Thanks.
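For reference, folding that check in, the whole function might look
roughly like the sketch below. This assumes the existing arm64
flush_dcache_page() body, which only tests and clears PG_dcache_clean,
and that hugetlb_free_vmemmap_enabled() compiles to false when
CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is disabled; it is untested:

void flush_dcache_page(struct page *page)
{
	/*
	 * When HugeTLB vmemmap freeing is active, the tail struct pages are
	 * mapped read-only, so redirect the flag update to the head page.
	 * The runtime check keeps the old behaviour for non-HugeTLB pages
	 * and for kernels where the feature is disabled.
	 */
	if (hugetlb_free_vmemmap_enabled() && PageHuge(page))
		page = compound_head(page);

	if (test_bit(PG_dcache_clean, &page->flags))
		clear_bit(PG_dcache_clean, &page->flags);
}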