On 08/30/23 16:33, Muchun Song wrote:
> 
> 
> On 2023/8/26 03:04, Mike Kravetz wrote:
> > When removing hugetlb pages from the pool, we first create a list
> > of removed pages and then free those pages back to low level allocators.
> > Part of the 'freeing process' is to restore vmemmap for all base pages
> > if necessary.  Pass this list of pages to a new routine
> > hugetlb_vmemmap_restore_folios() so that vmemmap restoration can be
> > performed in bulk.
> > 
> > Signed-off-by: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
> > ---
> >   mm/hugetlb.c         | 3 +++
> >   mm/hugetlb_vmemmap.c | 8 ++++++++
> >   mm/hugetlb_vmemmap.h | 6 ++++++
> >   3 files changed, 17 insertions(+)
> > 
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index 3133dbd89696..1bde5e234d5c 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -1833,6 +1833,9 @@ static void update_and_free_pages_bulk(struct hstate *h, struct list_head *list)
> >   {
> >   	struct folio *folio, *t_folio;
> >   
> > +	/* First restore vmemmap for all pages on list. */
> > +	hugetlb_vmemmap_restore_folios(h, list);
> > +
> >   	list_for_each_entry_safe(folio, t_folio, list, lru) {
> >   		update_and_free_hugetlb_folio(h, folio, false);
> >   		cond_resched();
> > diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> > index 147018a504a6..d5e6b6c76dce 100644
> > --- a/mm/hugetlb_vmemmap.c
> > +++ b/mm/hugetlb_vmemmap.c
> > @@ -479,6 +479,14 @@ int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
> >   	return ret;
> >   }
> 
> Because it is a void function, I'd like to add a comment here like:
> 
> This function only tries to restore a list of folios' vmemmap pages and
> does not guarantee that the restoration will succeed after it returns.

Will do.  Thanks!
-- 
Mike Kravetz