On 09/19/23 14:09, Muchun Song wrote:
> 
> 
> On 2023/9/19 07:01, Mike Kravetz wrote:
> > Now that batching of hugetlb vmemmap optimization processing is possible,
> > batch the freeing of vmemmap pages. When freeing vmemmap pages for a
> > hugetlb page, we add them to a list that is freed after the entire batch
> > has been processed.
> > 
> > This enhances the ability to return contiguous ranges of memory to the
> > low level allocators.
> > 
> > Signed-off-by: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
> 
> Reviewed-by: Muchun Song <songmuchun@xxxxxxxxxxxxx>
> 
> One nit below.
> 
> > ---
> >   mm/hugetlb_vmemmap.c | 85 ++++++++++++++++++++++++++++++--------------
> >   1 file changed, 59 insertions(+), 26 deletions(-)
> > 
> > diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> > index 463a4037ec6e..147ed15bcae4 100644
> > --- a/mm/hugetlb_vmemmap.c
> > +++ b/mm/hugetlb_vmemmap.c
> > @@ -222,6 +222,9 @@ static void free_vmemmap_page_list(struct list_head *list)
> >   {
> >   	struct page *page, *next;
> > +	if (list_empty(list))
> > +		return;
> 
> It seems unnecessary since the following "list_for_each_entry_safe"
> could handle the empty-list case. Right?
> 

Yes, it is an over-optimization that is not really necessary.  I will
remove it.
-- 
Mike Kravetz
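
For reference, the reason the explicit check is redundant: list_for_each_entry_safe()
starts from head->next and stops as soon as the cursor points back at the list head,
so on an empty list (where head->next == head) the loop body never executes. The
sketch below is a minimal userspace illustration of that iteration pattern, not the
kernel's actual <linux/list.h> code: the simplified macro takes an explicit type
argument instead of using typeof(), and struct page_stub is a hypothetical stand-in
for struct page.

#include <stdio.h>
#include <stddef.h>

/* Simplified circular doubly-linked list, mirroring the kernel's list.h idea. */
struct list_head {
	struct list_head *next, *prev;
};

#define LIST_HEAD_INIT(name) { &(name), &(name) }

/* container_of: recover the enclosing structure from an embedded list_head. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/*
 * Safe iteration: 'n' caches the next node so the current entry may be
 * removed or freed inside the loop body.  On an empty list the first
 * "entry" is the head itself, so the loop condition fails immediately
 * and the body never runs -- which is why a list_empty() check before
 * the loop is redundant.  (The real kernel macro uses typeof() and does
 * not take a type argument.)
 */
#define list_for_each_entry_safe(pos, n, head, type, member)		\
	for (pos = container_of((head)->next, type, member),		\
	     n = container_of(pos->member.next, type, member);		\
	     &pos->member != (head);					\
	     pos = n, n = container_of(n->member.next, type, member))

struct page_stub {			/* hypothetical stand-in for struct page */
	int id;
	struct list_head lru;
};

int main(void)
{
	struct list_head empty = LIST_HEAD_INIT(empty);
	struct page_stub *page, *next;
	int iterations = 0;

	list_for_each_entry_safe(page, next, &empty, struct page_stub, lru)
		iterations++;

	printf("iterations over empty list: %d\n", iterations);	/* prints 0 */
	return 0;
}

The safe variant caches the next node before the body runs, which is what allows
free_vmemmap_page_list() to free the current page while iterating; the empty-list
case simply falls out of the loop condition at no extra cost.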