On Wed, Dec 9, 2020 at 5:57 PM David Hildenbrand <david@xxxxxxxxxx> wrote:
>
> On 30.11.20 16:18, Muchun Song wrote:
> > We can only free the tail vmemmap pages of HugeTLB to the buddy allocator
> > when the size of struct page is a power of two.
> >
> > Signed-off-by: Muchun Song <songmuchun@xxxxxxxxxxxxx>
> > ---
> >  mm/hugetlb_vmemmap.c | 5 +++++
> >  1 file changed, 5 insertions(+)
> >
> > diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> > index 51152e258f39..ad8fc61ea273 100644
> > --- a/mm/hugetlb_vmemmap.c
> > +++ b/mm/hugetlb_vmemmap.c
> > @@ -111,6 +111,11 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
> >  	unsigned int nr_pages = pages_per_huge_page(h);
> >  	unsigned int vmemmap_pages;
> >
> > +	if (!is_power_of_2(sizeof(struct page))) {
> > +		pr_info("disable freeing vmemmap pages for %s\n", h->name);
>
> I'd just drop that pr_info(). Users are able to observe that it's
> working (below), so they are able to identify that it's not working as
> well.

The message below is only a pr_debug(). Do you suggest converting it to
pr_info()?

>
> > +		return;
> > +	}
> > +
> >  	vmemmap_pages = (nr_pages * sizeof(struct page)) >> PAGE_SHIFT;
> >  	/*
> >  	 * The head page and the first tail page are not to be freed to buddy
> >
>
> Please squash this patch into the enabling patch and add a comment
> instead, like
>
> /* We cannot optimize if a "struct page" crosses page boundaries. */
>

Will do. Thanks.

> --
> Thanks,
>
> David / dhildenb
>

--
Yours,
Muchun
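
P.S. As a side note, here is a minimal userspace sketch (not kernel code;
it assumes a 4 KiB PAGE_SIZE and uses made-up struct page sizes) of why the
power-of-two check matters: PAGE_SIZE is itself a power of two, so
sizeof(struct page) divides it evenly only if it is a power of two as well,
and otherwise some struct page in the vmemmap crosses a page boundary.

/* Userspace illustration only, not part of the patch. */
#include <stdbool.h>
#include <stdio.h>

#define PAGE_SIZE	4096UL

static bool is_power_of_2(unsigned long n)
{
	return n != 0 && (n & (n - 1)) == 0;
}

int main(void)
{
	/* Hypothetical struct page sizes, chosen for illustration. */
	unsigned long sizes[] = { 64, 80 };
	unsigned int i;

	for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
		unsigned long size = sizes[i];

		printf("size %lu: power of 2? %s, crosses page boundary? %s\n",
		       size, is_power_of_2(size) ? "yes" : "no",
		       (PAGE_SIZE % size) ? "yes" : "no");
	}

	return 0;
}

With these values, 64 packs exactly 64 struct pages into each 4 KiB page,
while 80 leaves 4096 % 80 = 16 bytes, so struct pages end up straddling
page boundaries, which is exactly what the
is_power_of_2(sizeof(struct page)) check rules out.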