On Mon, Jun 24, 2019 at 12:21:08PM +0800, Pingfan Liu wrote:
> The current pfn_range_valid_gigantic() rejects a pud huge page allocation
> if there is a pmd huge page inside the candidate range.
>
> But a pud huge page is a scarcer resource, since it must be aligned to 1GB
> on x86. It is worth migrating a pmd huge page away to make room for a pud
> huge page.
>
> The same logic applies to pgd and pud huge pages.

I'm sorry, but I don't quite understand why we should do this.  Is this a
bug or an optimization?  It sounds like an optimization.

>
> Signed-off-by: Pingfan Liu <kernelfans@xxxxxxxxx>
> Cc: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
> Cc: Oscar Salvador <osalvador@xxxxxxx>
> Cc: David Hildenbrand <david@xxxxxxxxxx>
> Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> Cc: linux-kernel@xxxxxxxxxxxxxxx
> ---
>  mm/hugetlb.c | 8 +++++---
>  1 file changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index ac843d3..02d1978 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1081,7 +1081,11 @@ static bool pfn_range_valid_gigantic(struct zone *z,
>  			unsigned long start_pfn, unsigned long nr_pages)
>  {
>  	unsigned long i, end_pfn = start_pfn + nr_pages;
> -	struct page *page;
> +	struct page *page = pfn_to_page(start_pfn);
> +
> +	if (PageHuge(page))
> +		if (compound_order(compound_head(page)) >= nr_pages)

I don't think you want compound_order() here.  compound_order() returns
the order of the compound page, i.e. log2 of its size in base pages,
while nr_pages is a count of pages, so this comparison mixes units; see
the sketch at the end of this mail.

Ira

> +			return false;
>
>  	for (i = start_pfn; i < end_pfn; i++) {
>  		if (!pfn_valid(i))
> @@ -1098,8 +1102,6 @@ static bool pfn_range_valid_gigantic(struct zone *z,
>  		if (page_count(page) > 0)
>  			return false;
>
> -		if (PageHuge(page))
> -			return false;
>  	}
>
>  	return true;
> --
> 2.7.5
>
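
For what it's worth: with 4KB base pages, a 1GB pud huge page means
nr_pages = 262144, while huge page orders on x86 top out far below that
(order 9 for 2MB, order 18 for 1GB), so as far as I can tell the check
above can never fire.  If the intent is "give up only when the existing
huge page is at least as large as the range we are trying to allocate",
both sides of the comparison need to be in pages, something like this
untested sketch:

	/*
	 * Untested sketch, not the code from the patch: convert the
	 * compound page's order to a page count before comparing it
	 * against nr_pages.
	 */
	if (PageHuge(page) &&
	    (1UL << compound_order(compound_head(page))) >= nr_pages)
		return false;

That keeps the comparison in a single unit (base pages) on both sides.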