On 3/18/20 3:15 PM, Dave Hansen wrote:
> Hi Mike,
>
> The series looks like a great idea to me.  One nit on the x86 bits,
> though...
>
>> diff --git a/arch/x86/mm/hugetlbpage.c b/arch/x86/mm/hugetlbpage.c
>> index 5bfd5aef5378..51e6208fdeec 100644
>> --- a/arch/x86/mm/hugetlbpage.c
>> +++ b/arch/x86/mm/hugetlbpage.c
>> @@ -181,16 +181,25 @@ hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
>>  #endif /* CONFIG_HUGETLB_PAGE */
>>
>>  #ifdef CONFIG_X86_64
>> +bool __init arch_hugetlb_valid_size(unsigned long long size)
>> +{
>> +	if (size == PMD_SIZE)
>> +		return true;
>> +	else if (size == PUD_SIZE && boot_cpu_has(X86_FEATURE_GBPAGES))
>> +		return true;
>> +	else
>> +		return false;
>> +}
>
> I'm pretty sure it's possible to have a system without 2M/PMD page
> support.  We even have a handy-dandy comment about it in
> arch/x86/include/asm/required-features.h:
>
> #ifdef CONFIG_X86_64
> #ifdef CONFIG_PARAVIRT
> /* Paravirtualized systems may not have PSE or PGE available */
> #define NEED_PSE	0
> ...
>
> I *think* you need an X86_FEATURE_PSE check here to be totally correct.
>
> 	if (size == PMD_SIZE && cpu_feature_enabled(X86_FEATURE_PSE))
> 		return true;
>
> BTW, I prefer cpu_feature_enabled() to boot_cpu_has() because it
> includes disabled-features checking.  I don't think any of it matters
> for these specific features, but I generally prefer it on principle.

Sounds good.  I'll incorporate those changes into a v2, unless someone
else has a different opinion.

BTW, this patch should not really change the way the code works today.
It is mostly a movement of code.  Unless I am missing something, the
existing code will always allow setup of PMD_SIZE hugetlb pages.

--
Mike Kravetz
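
For reference, a rough sketch of what the function might look like with
both of Dave's suggestions folded in (keeping the size-parameter type
from the quoted v1 patch; the actual v2 may differ):

	bool __init arch_hugetlb_valid_size(unsigned long long size)
	{
		/* 2M/PMD pages need PSE; paravirtualized guests may lack it */
		if (size == PMD_SIZE && cpu_feature_enabled(X86_FEATURE_PSE))
			return true;
		/* 1G/PUD pages need the GBPAGES (pdpe1gb) CPU feature */
		else if (size == PUD_SIZE && cpu_feature_enabled(X86_FEATURE_GBPAGES))
			return true;
		else
			return false;
	}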