In 32-bit programs, the address space is limited. In legacy (bottom-up)
layout mode, once normal mmap() calls have consumed the space above
TASK_UNMAPPED_BASE, a regular mmap() can still obtain an unmapped area
below TASK_UNMAPPED_BASE, but an mmap() or shmat() for huge pages will
fail. This seems unfair. When the request for huge pages fails, fall
back to reusing the mmap_min_addr ~ TASK_UNMAPPED_BASE range for
hugetlbfs.

Signed-off-by: Shijie Hu <hushijie3@xxxxxxxxxx>
---
 fs/hugetlbfs/inode.c | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index aff8642f0c2e..0f5997394aaa 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -224,7 +224,21 @@ hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
 	info.high_limit = TASK_SIZE;
 	info.align_mask = PAGE_MASK & ~huge_page_mask(h);
 	info.align_offset = 0;
-	return vm_unmapped_area(&info);
+	addr = vm_unmapped_area(&info);
+
+	/*
+	 * A failed request for huge pages very likely causes application
+	 * failure, so fall back to the top-down function here.
+	 */
+	if (unlikely(offset_in_page(addr))) {
+		VM_BUG_ON(addr != -ENOMEM);
+		info.flags = VM_UNMAPPED_AREA_TOPDOWN;
+		info.low_limit = max(PAGE_SIZE, mmap_min_addr);
+		info.high_limit = TASK_UNMAPPED_BASE;
+		addr = vm_unmapped_area(&info);
+	}
+
+	return addr;
 }
 #endif
 
-- 
2.12.3
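
A quick way to exercise this path from userspace is sketched below. It is
not part of the patch; it assumes a 2 MiB default huge page size, huge
pages already reserved via /proc/sys/vm/nr_hugepages, and a 32-bit build
(-m32) to reproduce the constrained address space. An anonymous
MAP_HUGETLB mapping with a NULL hint should go through the same
hugetlb_get_unmapped_area() path this patch changes: without the patch
the call fails with ENOMEM once the area above TASK_UNMAPPED_BASE is
exhausted; with it, the mapping falls back below TASK_UNMAPPED_BASE.

/*
 * Userspace sketch, not part of the patch. Assumed setup:
 *   echo 20 > /proc/sys/vm/nr_hugepages
 *   gcc -m32 -o hugemap hugemap.c
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define HUGE_LEN (2UL * 1024 * 1024)	/* one 2 MiB huge page */

int main(void)
{
	/* NULL hint: the kernel chooses the area via get_unmapped_area() */
	void *p = mmap(NULL, HUGE_LEN, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap(MAP_HUGETLB)");
		return 1;
	}
	printf("huge page mapped at %p\n", p);

	memset(p, 0, HUGE_LEN);	/* touch the mapping to fault the page in */
	munmap(p, HUGE_LEN);
	return 0;
}

Running it in a loop after filling the region above TASK_UNMAPPED_BASE
with ordinary mappings should show the reported address drop below
TASK_UNMAPPED_BASE once the fallback triggers.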