On 3/29/21 7:21 PM, Muchun Song wrote:
> On Tue, Mar 30, 2021 at 7:24 AM Mike Kravetz <mike.kravetz@xxxxxxxxxx> wrote:
>>
>> With the introduction of remove_hugetlb_page(), there is no need for
>> update_and_free_page to hold the hugetlb lock.  Change all callers to
>> drop the lock before calling.
>>
>> With additional code modifications, this will allow loops which decrease
>> the huge page pool to drop the hugetlb_lock with each page to reduce
>> long hold times.
>>
>> The ugly unlock/lock cycle in free_pool_huge_page will be removed in
>> a subsequent patch which restructures free_pool_huge_page.
>>
>> Signed-off-by: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
>> Acked-by: Michal Hocko <mhocko@xxxxxxxx>
>> Reviewed-by: Muchun Song <songmuchun@xxxxxxxxxxxxx>
>> ---
>>  mm/hugetlb.c | 32 +++++++++++++++++++++++++++-----
>>  1 file changed, 27 insertions(+), 5 deletions(-)
>>
>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>> index 16beabbbbe49..dec7bd0dc63d 100644
>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
>> @@ -1451,16 +1451,18 @@ static void __free_huge_page(struct page *page)
>>
>>  	if (HPageTemporary(page)) {
>>  		remove_hugetlb_page(h, page, false);
>> +		spin_unlock(&hugetlb_lock);
>>  		update_and_free_page(h, page);
>>  	} else if (h->surplus_huge_pages_node[nid]) {
>>  		/* remove the page from active list */
>>  		remove_hugetlb_page(h, page, true);
>> +		spin_unlock(&hugetlb_lock);
>>  		update_and_free_page(h, page);
>>  	} else {
>>  		arch_clear_hugepage_flags(page);
>>  		enqueue_huge_page(h, page);
>> +		spin_unlock(&hugetlb_lock);
>>  	}
>> -	spin_unlock(&hugetlb_lock);
>>  }
>>
>>  /*
>> @@ -1741,7 +1743,13 @@ static int free_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed,
>>  			list_entry(h->hugepage_freelists[node].next,
>>  				   struct page, lru);
>>  			remove_hugetlb_page(h, page, acct_surplus);
>> +			/*
>> +			 * unlock/lock around update_and_free_page is temporary
>> +			 * and will be removed with subsequent patch.
>> +			 */
>> +			spin_unlock(&hugetlb_lock);
>>  			update_and_free_page(h, page);
>> +			spin_lock(&hugetlb_lock);
>>  			ret = 1;
>>  			break;
>>  		}
>> @@ -1810,8 +1818,9 @@ int dissolve_free_huge_page(struct page *page)
>>  		}
>>  		remove_hugetlb_page(h, page, false);
>>  		h->max_huge_pages--;
>> +		spin_unlock(&hugetlb_lock);
>>  		update_and_free_page(h, head);
>> -		rc = 0;
>> +		return 0;
>>  	}
>>  out:
>>  	spin_unlock(&hugetlb_lock);
>> @@ -2674,22 +2683,35 @@ static void try_to_free_low(struct hstate *h, unsigned long count,
>>  						nodemask_t *nodes_allowed)
>>  {
>>  	int i;
>> +	struct page *page, *next;
>> +	LIST_HEAD(page_list);
>>
>>  	if (hstate_is_gigantic(h))
>>  		return;
>>
>> +	/*
>> +	 * Collect pages to be freed on a list, and free after dropping lock
>> +	 */
>> +	INIT_LIST_HEAD(&page_list);
>
> INIT_LIST_HEAD is unnecessary, because the LIST_HEAD macro
> already initializes the list_head structure.
>

Thanks.  I will fix this here and the same issue in patch 6.
--
Mike Kravetz
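
For reference, the reason the INIT_LIST_HEAD() call is redundant: the
LIST_HEAD() macro in include/linux/list.h both declares and initializes
the list head, so a following INIT_LIST_HEAD() only repeats that work.
A minimal sketch of the relevant definitions (paraphrased from mainline;
exact details may differ across kernel versions):

	/* Static initializer: both pointers refer back to the head itself. */
	#define LIST_HEAD_INIT(name) { &(name), &(name) }

	/* Declares 'name' and initializes it in one step. */
	#define LIST_HEAD(name) \
		struct list_head name = LIST_HEAD_INIT(name)

	/* Runtime (re)initialization -- redundant right after LIST_HEAD(). */
	static inline void INIT_LIST_HEAD(struct list_head *list)
	{
		WRITE_ONCE(list->next, list);
		list->prev = list;
	}

So LIST_HEAD(page_list) already leaves page_list->next and
page_list->prev pointing at page_list, which is exactly the empty-list
state that INIT_LIST_HEAD(&page_list) would establish.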
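
With that call dropped, the collect-then-free pattern the changelog
describes takes roughly the following shape.  This is a sketch of the
intent only, not the exact remainder of the hunk quoted (and truncated)
above; the loop details are illustrative:

	static void try_to_free_low(struct hstate *h, unsigned long count,
						nodemask_t *nodes_allowed)
	{
		int i;
		struct page *page, *next;
		LIST_HEAD(page_list);	/* already initialized by the macro */

		if (hstate_is_gigantic(h))
			return;

		/*
		 * Collect pages to be freed on a list while holding
		 * hugetlb_lock, then free them after dropping the lock.
		 */
		for_each_node_mask(i, *nodes_allowed) {
			list_for_each_entry_safe(page, next,
					&h->hugepage_freelists[i], lru) {
				if (count >= h->nr_huge_pages)
					goto out;
				if (PageHighMem(page))
					continue;
				remove_hugetlb_page(h, page, false);
				list_add(&page->lru, &page_list);
			}
		}

	out:
		spin_unlock(&hugetlb_lock);
		list_for_each_entry_safe(page, next, &page_list, lru)
			update_and_free_page(h, page);
		spin_lock(&hugetlb_lock);
	}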