On 3/25/21 7:10 PM, Miaohe Lin wrote:
> On 2021/3/25 8:28, Mike Kravetz wrote:
>> The new remove_hugetlb_page() routine is designed to remove a hugetlb
>> page from hugetlbfs processing. It will remove the page from the active
>> or free list, update global counters and set the compound page
>> destructor to NULL so that PageHuge() will return false for the 'page'.
>> After this call, the 'page' can be treated as a normal compound page or
>> a collection of base size pages.
>>
>> remove_hugetlb_page is to be called with the hugetlb_lock held.
>>
>> Creating this routine and separating functionality is in preparation for
>> restructuring code to reduce lock hold times.
>>
>> Signed-off-by: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
>> ---
>>  mm/hugetlb.c | 70 +++++++++++++++++++++++++++++++++-------------------
>>  1 file changed, 45 insertions(+), 25 deletions(-)
>>
>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>> index 404b0b1c5258..3938ec086b5c 100644
>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
>> @@ -1327,6 +1327,46 @@ static inline void destroy_compound_gigantic_page(struct page *page,
>>  						unsigned int order) { }
>>  #endif
>>
>> +/*
>> + * Remove hugetlb page from lists, and update dtor so that page appears
>> + * as just a compound page.  A reference is held on the page.
>> + * NOTE: hugetlb specific page flags stored in page->private are not
>> + *	 automatically cleared.  These flags may be used in routines
>> + *	 which operate on the resulting compound page.
>
> It seems HPageFreed and HPageTemporary are cleared. Which hugetlb specific
> page flags are preserved here, and why? Could you please give a simple
> example in the comment to clarify this and help readers understand the NOTE?
>

I will remove that NOTE: in the comment to avoid any confusion.

The NOTE was added in the RFC, which contained a separate patch adding a
flag to track huge pages allocated from CMA. That flag needed to remain
set for the subsequent freeing of such pages. This is no longer needed.

> The code looks good to me. Many thanks!
> Reviewed-by: Miaohe Lin <linmiaohe@xxxxxxxxxx>

Thanks,
-- 
Mike Kravetz
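
[Editor's note: the quoted hunk above shows only the comment, not the body of
remove_hugetlb_page(). The following is a rough sketch reconstructed from the
commit description alone (unlink from the list, update global counters, clear
the compound page destructor so PageHuge() returns false). Helper and field
names such as adjust_surplus, free_huge_pages_node, HPageFreed and
NULL_COMPOUND_DTOR follow hugetlb conventions of that era; treat this as an
illustration, not the exact patch body.]

/*
 * Illustrative sketch only -- reconstructed from the commit message,
 * not copied from the actual patch.  Called with hugetlb_lock held.
 */
static void remove_hugetlb_page(struct hstate *h, struct page *page,
				bool adjust_surplus)
{
	int nid = page_to_nid(page);

	/* Unlink the page from the active or free list. */
	list_del(&page->lru);

	/* If it sat on the free list, fix up the free-page counters. */
	if (HPageFreed(page)) {
		h->free_huge_pages--;
		h->free_huge_pages_node[nid]--;
		ClearHPageFreed(page);
	}
	if (adjust_surplus) {
		h->surplus_huge_pages--;
		h->surplus_huge_pages_node[nid]--;
	}

	ClearHPageTemporary(page);

	/*
	 * Clear the compound page destructor so that PageHuge() returns
	 * false; from here on the page is just a normal compound page.
	 */
	set_compound_page_dtor(page, NULL_COMPOUND_DTOR);

	h->nr_huge_pages--;
	h->nr_huge_pages_node[nid]--;
}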