Currently, if the THP enabled policy is "always", or it is "madvise" and
the region is marked with MADV_HUGEPAGE, a hugepage is allocated on a
page fault if the pud or pmd is empty. This yields the best VA
translation performance, but increases memory consumption if some small
page ranges within the huge page are never accessed.

An alternate behavior for such page faults is to install a hugepage only
when a region is actually found to be (almost) fully mapped and active.
This is a compromise between translation performance and memory
consumption. Currently there is no way for an application to choose this
compromise for the page fault conditions above.

With this change, when an application issues MADV_DONTNEED on a memory
region, the region is marked as "space-efficient". For such regions, a
hugepage is not immediately allocated on first touch. Instead, hugepage
promotion is left to the khugepaged thread, which performs it later,
depending on whether the region is actually mapped and active. When an
application issues MADV_HUGEPAGE, the region is marked as
non-space-efficient again, and a hugepage is once more allocated on
first touch.

Orabug: 26910556

Reviewed-by: Steve Sistare <steven.sistare@xxxxxxxxxx>
Signed-off-by: Nitin Gupta <nitin.m.gupta@xxxxxxxxxx>
---
 include/linux/mm_types.h | 1 +
 mm/khugepaged.c          | 1 +
 mm/madvise.c             | 1 +
 mm/memory.c              | 6 ++++--
 4 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index cfd0ac4..6d0783a 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -339,6 +339,7 @@ struct vm_area_struct {
 	struct mempolicy *vm_policy;	/* NUMA policy for the VMA */
 #endif
 	struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
+	bool space_efficient;
 } __randomize_layout;
 
 struct core_thread {
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index ea4ff25..2f4037a 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -319,6 +319,7 @@ int hugepage_madvise(struct vm_area_struct *vma,
 #endif
 		*vm_flags &= ~VM_NOHUGEPAGE;
 		*vm_flags |= VM_HUGEPAGE;
+		vma->space_efficient = false;
 		/*
 		 * If the vma become good for khugepaged to scan,
 		 * register it here without waiting a page fault that
diff --git a/mm/madvise.c b/mm/madvise.c
index 751e97a..b2ec07b 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -508,6 +508,7 @@ static long madvise_dontneed_single_vma(struct vm_area_struct *vma,
 					unsigned long start, unsigned long end)
 {
 	zap_page_range(vma, start, end - start);
+	vma->space_efficient = true;
 	return 0;
 }
 
diff --git a/mm/memory.c b/mm/memory.c
index 5eb3d25..6485014 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4001,7 +4001,8 @@ static int __handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
 	vmf.pud = pud_alloc(mm, p4d, address);
 	if (!vmf.pud)
 		return VM_FAULT_OOM;
-	if (pud_none(*vmf.pud) && transparent_hugepage_enabled(vma)) {
+	if (pud_none(*vmf.pud) && transparent_hugepage_enabled(vma)
+	    && !vma->space_efficient) {
 		ret = create_huge_pud(&vmf);
 		if (!(ret & VM_FAULT_FALLBACK))
 			return ret;
@@ -4027,7 +4028,8 @@ static int __handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
 	vmf.pmd = pmd_alloc(mm, vmf.pud, address);
 	if (!vmf.pmd)
 		return VM_FAULT_OOM;
-	if (pmd_none(*vmf.pmd) && transparent_hugepage_enabled(vma)) {
+	if (pmd_none(*vmf.pmd) && transparent_hugepage_enabled(vma)
+	    && !vma->space_efficient) {
 		ret = create_huge_pmd(&vmf);
 		if (!(ret & VM_FAULT_FALLBACK))
 			return ret;
-- 
2.9.2
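
P.S. for reviewers: below is a minimal userspace sketch (not part of the
patch) of how the space-efficient toggle is expected to behave. It
assumes a kernel with this patch applied and THP enabled ("always", or
"madvise" together with the MADV_HUGEPAGE call). Checking AnonHugePages
in /proc/<pid>/smaps is just one way to observe the result; HPAGE_SZ and
the over-map-and-align trick are local to the example.

/*
 * Illustrative sketch only -- exercises the "space-efficient" VMA
 * behavior described in the commit message above.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define HPAGE_SZ	(2UL * 1024 * 1024)	/* pmd-sized huge page */

int main(void)
{
	/* Over-map so a 2M-aligned start can be picked inside the mapping. */
	void *raw = mmap(NULL, 2 * HPAGE_SZ, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	char *buf;

	if (raw == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	buf = (char *)(((uintptr_t)raw + HPAGE_SZ - 1) & ~(HPAGE_SZ - 1));

	/*
	 * MADV_HUGEPAGE clears vma->space_efficient (see hugepage_madvise),
	 * so the first touch below may fault in a whole hugepage.
	 */
	madvise(buf, HPAGE_SZ, MADV_HUGEPAGE);
	memset(buf, 1, HPAGE_SZ);

	/*
	 * MADV_DONTNEED zaps the range and, with this patch, sets
	 * vma->space_efficient. The touch below then gets a base page;
	 * promotion is deferred to khugepaged if the range becomes dense.
	 */
	madvise(buf, HPAGE_SZ, MADV_DONTNEED);
	buf[0] = 1;

	/* Pause here: grep AnonHugePages /proc/<pid>/smaps to observe. */
	printf("pid %d -- check AnonHugePages in /proc/%d/smaps\n",
	       getpid(), getpid());
	getchar();

	munmap(raw, 2 * HPAGE_SZ);
	return 0;
}

Running it before and after each madvise call should show AnonHugePages
drop to zero after MADV_DONTNEED and stay there on the re-touch, until
khugepaged decides to collapse the range.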