On 03/30/2016 07:26 PM, Naoya Horiguchi wrote:
> On Tue, Mar 29, 2016 at 10:05:31AM -0700, Mike Kravetz wrote:
>> On 03/29/2016 01:35 AM, Ingo Molnar wrote:
>>>
>>> * Mike Kravetz <mike.kravetz@xxxxxxxxxx> wrote:
>>>
>>>> When creating a hugetlb mapping, attempt PUD_SIZE alignment if the
>>>> following conditions are met:
>>>> - Address passed to mmap or shmat is NULL
>>>> - The mapping is flagged as shared
>>>> - The mapping is at least PUD_SIZE in length
>>>> If a PUD_SIZE aligned mapping cannot be created, then fall back to a
>>>> huge page size mapping.
>>>>
>>>> Signed-off-by: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
>>>> ---
>>>>  arch/x86/mm/hugetlbpage.c | 64 ++++++++++++++++++++++++++++++++++++++++++++---
>>>>  1 file changed, 61 insertions(+), 3 deletions(-)
>>>>
>>>> diff --git a/arch/x86/mm/hugetlbpage.c b/arch/x86/mm/hugetlbpage.c
>>>> index 42982b2..4f53af5 100644
>>>> --- a/arch/x86/mm/hugetlbpage.c
>>>> +++ b/arch/x86/mm/hugetlbpage.c
>>>> @@ -78,14 +78,39 @@ static unsigned long hugetlb_get_unmapped_area_bottomup(struct file *file,
>>>>  {
>>>>  	struct hstate *h = hstate_file(file);
>>>>  	struct vm_unmapped_area_info info;
>>>> +	bool pud_size_align = false;
>>>> +	unsigned long ret_addr;
>>>> +
>>>> +	/*
>>>> +	 * If PMD sharing is enabled, align to PUD_SIZE to facilitate
>>>> +	 * sharing.  Only attempt alignment if no address was passed in,
>>>> +	 * flags indicate sharing and size is big enough.
>>>> +	 */
>>>> +	if (IS_ENABLED(CONFIG_ARCH_WANT_HUGE_PMD_SHARE) &&
>>>> +	    !addr && flags & MAP_SHARED && len >= PUD_SIZE)
>>>> +		pud_size_align = true;
>>>>
>>>>  	info.flags = 0;
>>>>  	info.length = len;
>>>>  	info.low_limit = current->mm->mmap_legacy_base;
>>>>  	info.high_limit = TASK_SIZE;
>>>> -	info.align_mask = PAGE_MASK & ~huge_page_mask(h);
>>>> +	if (pud_size_align)
>>>> +		info.align_mask = PAGE_MASK & (PUD_SIZE - 1);
>>>> +	else
>>>> +		info.align_mask = PAGE_MASK & ~huge_page_mask(h);
>>>>  	info.align_offset = 0;
>>>> -	return vm_unmapped_area(&info);
>>>> +	ret_addr = vm_unmapped_area(&info);
>>>> +
>>>> +	/*
>>>> +	 * If failed with PUD_SIZE alignment, try again with huge page
>>>> +	 * size alignment.
>>>> +	 */
>>>> +	if ((ret_addr & ~PAGE_MASK) && pud_size_align) {
>>>> +		info.align_mask = PAGE_MASK & ~huge_page_mask(h);
>>>> +		ret_addr = vm_unmapped_area(&info);
>>>> +	}
>>>
>>> So AFAICS 'ret_addr' is either page aligned, or is an error code. Wouldn't it be a
>>> lot easier to read to say:
>>>
>>>	if ((long)ret_addr < 0 && pud_size_align) {
>>>		info.align_mask = PAGE_MASK & ~huge_page_mask(h);
>>>		ret_addr = vm_unmapped_area(&info);
>>>	}
>>>
>>>	return ret_addr;
>>>
>>> to make it clear that it's about error handling, not some alignment
>>> requirement/restriction?
>>
>> Yes, I agree that is easier to read. However, it assumes that process
>> virtual addresses can never evaluate to a negative long value. This may
>> be the case for x86_64 today, but there are other architectures where it
>> is not. I know this is x86-specific code, but might it be possible that
>> x86 virtual addresses could be negative longs in the future?
>>
>> It appears that all callers of vm_unmapped_area() are using the page
>> aligned check to determine error. I would prefer to do the same, and can
>> add comments to make that more clear.
>
> IS_ERR_VALUE() might be helpful?

Thanks Naoya, I'll change all this to use IS_ERR_VALUE().

--
Mike Kravetz
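
For reference, a minimal sketch of what the IS_ERR_VALUE() version of the
retry logic might look like, applied to the v1 hunk quoted above (an
illustration, not the posted follow-up patch):

	ret_addr = vm_unmapped_area(&info);

	/*
	 * vm_unmapped_area() returns a negative errno cast to an
	 * unsigned long on failure; IS_ERR_VALUE() (include/linux/err.h)
	 * detects that range directly instead of inferring failure from
	 * the result not being page aligned.  If the PUD_SIZE aligned
	 * attempt failed, retry with plain huge page size alignment.
	 */
	if (IS_ERR_VALUE(ret_addr) && pud_size_align) {
		info.align_mask = PAGE_MASK & ~huge_page_mask(h);
		ret_addr = vm_unmapped_area(&info);
	}

	return ret_addr;

Note that vm_unmapped_area() treats align_mask as the set of low address
bits that must be zero in the returned address: with 2 MB huge pages,
PAGE_MASK & ~huge_page_mask(h) covers bits 12-20, while
PAGE_MASK & (PUD_SIZE - 1) widens that to bits 12-29, i.e. 1 GB alignment
on x86_64, which is what allows the PMD page table page to be shared.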