PMD sharing for hugetlb mappings has been present for quite some time.
However, specific conditions must be met for mappings to be shared.
One of those conditions is that the mapping must include all pages that
can be mapped by a PUD. To help facilitate this, the mapping should be
PUD_SIZE aligned. The only way for a user to get PUD_SIZE alignment is
to pass an address to mmap() or shmat(). If the user does not pass an
address, the mapping will be huge page size aligned.

To better utilize huge PMD sharing, attempt to PUD_SIZE align mappings
if the following conditions are met:
- Address passed to mmap() or shmat() is NULL
- The mapping is flagged as shared
- The mapping is at least PUD_SIZE in length

If a PUD_SIZE aligned mapping cannot be created, then fall back to a
huge page size aligned mapping. (A short sketch of this decision
follows the diffstat below.)

Currently, only arm64 and x86 support PMD sharing. x86 has
HAVE_ARCH_HUGETLB_UNMAPPED_AREA (where code changes are made). arm64
uses the architecture independent code.

Mike Kravetz (2):
  mm/hugetlbfs: Attempt PUD_SIZE mapping alignment if PMD sharing enabled
  x86/hugetlb: Attempt PUD_SIZE mapping alignment if PMD sharing enabled

 arch/x86/mm/hugetlbpage.c | 64 ++++++++++++++++++++++++++++++++++++++++++++---
 fs/hugetlbfs/inode.c      | 29 +++++++++++++++++++--
 2 files changed, 88 insertions(+), 5 deletions(-)
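
Not part of the series itself, but to make the policy above concrete,
here is a minimal standalone userspace sketch of the alignment
decision. The constant values and the pick_alignment() helper are
assumptions made for this illustration only; the actual changes are to
hugetlb_get_unmapped_area() in fs/hugetlbfs/inode.c and the x86 variant
in arch/x86/mm/hugetlbpage.c, as reflected in the diffstat.

#include <stdio.h>
#include <stdbool.h>

#define HPAGE_SIZE (2UL << 20)  /* 2 MB huge page, x86 default; assumption for this sketch */
#define PUD_SIZE   (1UL << 30)  /* 1 GB, the range covered by one PUD entry on x86 */

/*
 * pick_alignment() is a hypothetical helper used only for this example.
 * It returns the alignment to aim for when the caller supplies no
 * address hint to mmap()/shmat().
 */
static unsigned long pick_alignment(const void *addr, bool shared, unsigned long len)
{
	/* Try PUD_SIZE alignment only if all three cover-letter conditions hold. */
	if (addr == NULL && shared && len >= PUD_SIZE)
		return PUD_SIZE;

	/* Otherwise fall back to ordinary huge page size alignment. */
	return HPAGE_SIZE;
}

int main(void)
{
	/* Shared 2 GB request with no hint: large enough, so PUD_SIZE alignment. */
	printf("shared 2GB:  0x%lx\n", pick_alignment(NULL, true, 2UL << 30));

	/* Shared 4 MB request: too small to span a PUD, huge page alignment. */
	printf("shared 4MB:  0x%lx\n", pick_alignment(NULL, true, 4UL << 20));

	/* Private request: not shared, so no PMD sharing and no PUD alignment. */
	printf("private 2GB: 0x%lx\n", pick_alignment(NULL, false, 2UL << 30));

	return 0;
}

Built with gcc, this prints 0x40000000 for the first case and 0x200000
for the other two, matching the fallback behavior described above.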