On Tue 06-04-21 09:49:13, Mike Kravetz wrote:
> On 4/6/21 2:56 AM, Michal Hocko wrote:
> > On Mon 05-04-21 16:00:39, Mike Kravetz wrote:
[...]
> >> @@ -2298,6 +2312,7 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
> >>  		/*
> >>  		 * Freed from under us. Drop new_page too.
> >>  		 */
> >> +		remove_hugetlb_page(h, new_page, false);
> >>  		update_and_free_page(h, new_page);
> >>  		goto unlock;
> >>  	} else if (page_count(old_page)) {
> >> @@ -2305,6 +2320,7 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
> >>  		/*
> >>  		 * Someone has grabbed the page, try to isolate it here.
> >>  		 * Fail with -EBUSY if not possible.
> >>  		 */
> >> +		remove_hugetlb_page(h, new_page, false);
> >>  		update_and_free_page(h, new_page);
> >>  		spin_unlock(&hugetlb_lock);
> >>  		if (!isolate_huge_page(old_page, list))
> >
> > the page is not enqueued anywhere here, so remove_hugetlb_page would
> > blow up when linked list debugging is enabled.
>
> I also thought this would be an issue.  However, INIT_LIST_HEAD would
> have been called for the page so,

OK, this is true for a freshly allocated hugetlb page
(prep_new_huge_page). It is a very subtle dependency though. If somebody
ever wants to fortify the linked list debugging and decides to check
list_del on an empty list, this would silently wait to blow up.

> Going forward, I agree it would be better to perhaps add a list_empty
> check so that things do not blow up if the debugging code is changed.

Yes, this is less tricky than a bool flag or adding more stages to the
tear down. Two stages are more than enough IMHO.

> At one time I also thought of splitting the functionality in
> alloc_fresh_huge_page and prep_new_huge_page so that it would only
> allocate/prep the page but not increment nr_huge_pages.

We already have that distinction. alloc_buddy_huge_page is there to
allocate a fresh huge page without any hstate accounting. Considering
that gigantic pages are not supported for the migration anyway, maybe
this would make Oscar's work slightly less tricky?
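
Btw. for the list_empty check discussed above I would imagine something
along these lines (completely untested, and I am writing the
remove_hugetlb_page prototype from memory, so take it as a sketch only):

static void remove_hugetlb_page(struct hstate *h, struct page *page,
				bool adjust_surplus)
{
	[...]
	/*
	 * A freshly allocated page has only seen INIT_LIST_HEAD in
	 * prep_new_huge_page and was never enqueued, so its lru is
	 * empty. Only unlink pages that actually sit on a list.
	 */
	if (!list_empty(&page->lru))
		list_del(&page->lru);
	[...]
}

That keeps the normal tear down path unchanged and only skips the unlink
for the never-enqueued case.

-- 
Michal Hocko
SUSE Labs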