On Tue, Apr 13, 2021 at 02:33:41PM -0700, Mike Kravetz wrote:
> > -static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
> > +/*
> > + * Must be called with the hugetlb lock held
> > + */
> > +static void __prep_account_new_huge_page(struct hstate *h, int nid)
> > +{
> > +	h->nr_huge_pages++;
> > +	h->nr_huge_pages_node[nid]++;
>
> I would prefer if we also move setting the destructor to this routine.
> 	set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);

Uhm, but that is the routine that does the accounting, it feels wrong
there, plus...

> That way, PageHuge() will be false until it 'really' is a huge page.
> If not, we could potentially go into that retry loop in
> dissolve_free_huge_page or alloc_and_dissolve_huge_page in patch 5.

...I do not follow here, could you please elaborate some more?
Unless I am missing something, behaviour should not be any different
with this patch.

Thanks

-- 
Oscar Salvador
SUSE L3