The patch titled
     Subject: thp, mlock: do not allow huge pages in mlocked area
has been removed from the -mm tree.  Its filename was
     thp-mlock-do-not-allow-huge-pages-in-mlocked-area.patch

This patch was dropped because it was merged into mainline or a subsystem tree

------------------------------------------------------
From: "Kirill A. Shutemov" <kirill.shutemov@xxxxxxxxxxxxxxx>
Subject: thp, mlock: do not allow huge pages in mlocked area

With the new refcounting a THP can belong to several VMAs.  This makes it
tricky to track THP pages when they are partially mlocked.  It can lead to
leaking mlocked pages to non-VM_LOCKED vmas and other problems.

With this patch we split all pages on mlock and avoid fault-in/collapse of
new THP in VM_LOCKED vmas.

I've tried an alternative approach: do not mark THP pages mlocked and keep
them on the normal LRUs.  This way vmscan could try to split huge pages
under memory pressure and free up the subpages which don't belong to
VM_LOCKED vmas.  But this is a user-visible change: we would screw up the
Mlocked accounting reported in meminfo, so I had to leave this approach
aside.

We can bring something better later, but this should be good enough for
now.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
Tested-by: Sasha Levin <sasha.levin@xxxxxxxxxx>
Tested-by: Aneesh Kumar K.V <aneesh.kumar@xxxxxxxxxxxxxxxxxx>
Acked-by: Jerome Marchand <jmarchan@xxxxxxxxxx>
Cc: Vlastimil Babka <vbabka@xxxxxxx>
Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>
Cc: Hugh Dickins <hughd@xxxxxxxxxx>
Cc: Dave Hansen <dave.hansen@xxxxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxx>
Cc: Rik van Riel <riel@xxxxxxxxxx>
Cc: Naoya Horiguchi <n-horiguchi@xxxxxxxxxxxxx>
Cc: Steve Capper <steve.capper@xxxxxxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxx>
Cc: Christoph Lameter <cl@xxxxxxxxx>
Cc: David Rientjes <rientjes@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/gup.c         |    3 +-
 mm/huge_memory.c |    5 +++-
 mm/memory.c      |    3 +-
 mm/mlock.c       |   51 ++++++++++++++++-----------------------
 4 files changed, 27 insertions(+), 35 deletions(-)

diff -puN mm/gup.c~thp-mlock-do-not-allow-huge-pages-in-mlocked-area mm/gup.c
--- a/mm/gup.c~thp-mlock-do-not-allow-huge-pages-in-mlocked-area
+++ a/mm/gup.c
@@ -927,7 +927,8 @@ long populate_vma_page_range(struct vm_a
         gup_flags = FOLL_TOUCH | FOLL_POPULATE | FOLL_MLOCK;
         if (vma->vm_flags & VM_LOCKONFAULT)
                 gup_flags &= ~FOLL_POPULATE;
-
+        if (vma->vm_flags & VM_LOCKED)
+                gup_flags |= FOLL_SPLIT;
         /*
          * We want to touch writable mappings with a write fault in order
          * to break COW, except for shared mappings because these don't COW
diff -puN mm/huge_memory.c~thp-mlock-do-not-allow-huge-pages-in-mlocked-area mm/huge_memory.c
--- a/mm/huge_memory.c~thp-mlock-do-not-allow-huge-pages-in-mlocked-area
+++ a/mm/huge_memory.c
@@ -842,6 +842,8 @@ int do_huge_pmd_anonymous_page(struct mm

         if (haddr < vma->vm_start || haddr + HPAGE_PMD_SIZE > vma->vm_end)
                 return VM_FAULT_FALLBACK;
+        if (vma->vm_flags & VM_LOCKED)
+                return VM_FAULT_FALLBACK;
         if (unlikely(anon_vma_prepare(vma)))
                 return VM_FAULT_OOM;
         if (unlikely(khugepaged_enter(vma, vma->vm_flags)))
@@ -2555,7 +2557,8 @@ static bool hugepage_vma_check(struct vm
         if ((!(vma->vm_flags & VM_HUGEPAGE) && !khugepaged_always()) ||
             (vma->vm_flags & VM_NOHUGEPAGE))
                 return false;
-
+        if (vma->vm_flags & VM_LOCKED)
+                return false;
         if (!vma->anon_vma || vma->vm_ops)
                 return false;
         if (is_vma_temporary_stack(vma))
diff -puN mm/memory.c~thp-mlock-do-not-allow-huge-pages-in-mlocked-area mm/memory.c
--- a/mm/memory.c~thp-mlock-do-not-allow-huge-pages-in-mlocked-area
+++ a/mm/memory.c
@@ -2166,7 +2166,8 @@ static int wp_page_copy(struct mm_struct

         pte_unmap_unlock(page_table, ptl);
         mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
-        if (old_page) {
+        /* THP pages are never mlocked */
+        if (old_page && !PageTransCompound(old_page)) {
                 /*
                  * Don't let another task, with possibly unlocked vma,
                  * keep the mlocked page.
diff -puN mm/mlock.c~thp-mlock-do-not-allow-huge-pages-in-mlocked-area mm/mlock.c
--- a/mm/mlock.c~thp-mlock-do-not-allow-huge-pages-in-mlocked-area
+++ a/mm/mlock.c
@@ -443,39 +443,26 @@ void munlock_vma_pages_range(struct vm_a
                 page = follow_page_mask(vma, start, FOLL_GET | FOLL_DUMP,
                                 &page_mask);

-                if (page && !IS_ERR(page)) {
-                        if (PageTransHuge(page)) {
-                                lock_page(page);
-                                /*
-                                 * Any THP page found by follow_page_mask() may
-                                 * have gotten split before reaching
-                                 * munlock_vma_page(), so we need to recompute
-                                 * the page_mask here.
-                                 */
-                                page_mask = munlock_vma_page(page);
-                                unlock_page(page);
-                                put_page(page); /* follow_page_mask() */
-                        } else {
-                                /*
-                                 * Non-huge pages are handled in batches via
-                                 * pagevec. The pin from follow_page_mask()
-                                 * prevents them from collapsing by THP.
-                                 */
-                                pagevec_add(&pvec, page);
-                                zone = page_zone(page);
-                                zoneid = page_zone_id(page);
+                if (page && !IS_ERR(page) && !PageTransCompound(page)) {
+                        /*
+                         * Non-huge pages are handled in batches via
+                         * pagevec. The pin from follow_page_mask()
+                         * prevents them from collapsing by THP.
+                         */
+                        pagevec_add(&pvec, page);
+                        zone = page_zone(page);
+                        zoneid = page_zone_id(page);

-                                /*
-                                 * Try to fill the rest of pagevec using fast
-                                 * pte walk. This will also update start to
-                                 * the next page to process. Then munlock the
-                                 * pagevec.
-                                 */
-                                start = __munlock_pagevec_fill(&pvec, vma,
-                                                zoneid, start, end);
-                                __munlock_pagevec(&pvec, zone);
-                                goto next;
-                        }
+                        /*
+                         * Try to fill the rest of pagevec using fast
+                         * pte walk. This will also update start to
+                         * the next page to process. Then munlock the
+                         * pagevec.
+                         */
+                        start = __munlock_pagevec_fill(&pvec, vma,
+                                        zoneid, start, end);
+                        __munlock_pagevec(&pvec, zone);
+                        goto next;
                 }
                 /* It's a bug to munlock in the middle of a THP page */
                 VM_BUG_ON((start >> PAGE_SHIFT) & page_mask);
_

Patches currently in -mm which might be from kirill.shutemov@xxxxxxxxxxxxxxx are

thp-update-documentation.patch
thp-allow-mlocked-thp-again.patch
mm-prepare-page_referenced-and-page_idle-to-new-thp-refcounting.patch
thp-add-debugfs-handle-to-split-all-huge-pages.patch
thp-increase-split_huge_page-success-rate.patch
thp-fix-split_huge_page-after-mremap-of-thp.patch
memblock-fix-section-mismatch.patch
mm-fix-locking-order-in-mm_take_all_locks.patch
mm-make-optimistic-check-for-swapin-readahead-fix.patch
mm-make-swapin-readahead-to-improve-thp-collapse-rate-fix.patch
mm-make-swapin-readahead-to-improve-thp-collapse-rate-fix-2.patch
mm-make-swapin-readahead-to-improve-thp-collapse-rate-fix-3.patch
--
To unsubscribe from this list: send the line "unsubscribe mm-commits" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
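
For readers who want to observe the effect described in the changelog, here
is a minimal userspace sketch.  It is not part of the patch; the file name,
the 8 MiB mapping size and the check against /proc/self/smaps are only
illustrative assumptions.  On a kernel with this change applied and THP
enabled, the mlock()ed anonymous mapping is expected to report
"AnonHugePages: 0 kB", because huge pages are split on mlock and neither
faulted in nor collapsed in VM_LOCKED vmas.

/*
 * mlock-thp-check.c -- minimal sketch, assumes a kernel with this patch
 * applied and CONFIG_TRANSPARENT_HUGEPAGE enabled.
 *
 * Build: gcc -O2 -o mlock-thp-check mlock-thp-check.c
 * Run with a large enough RLIMIT_MEMLOCK (e.g. "ulimit -l unlimited").
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define LEN (8UL << 20)        /* 8 MiB: spans several PMD-sized (2 MiB) units */

int main(void)
{
        FILE *smaps;
        char line[256];

        void *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
                perror("mmap");
                return 1;
        }

        /* Explicitly request THP, then lock and touch the whole range. */
        madvise(p, LEN, MADV_HUGEPAGE);
        if (mlock(p, LEN)) {
                perror("mlock");
                return 1;
        }
        memset(p, 0, LEN);

        /* Print the per-VMA AnonHugePages counters for manual inspection. */
        printf("mapping at %p, length %lu bytes\n", p, LEN);
        smaps = fopen("/proc/self/smaps", "r");
        if (!smaps) {
                perror("fopen");
                return 1;
        }
        while (fgets(line, sizeof(line), smaps))
                if (strstr(line, "AnonHugePages:"))
                        fputs(line, stdout);
        fclose(smaps);
        return 0;
}

On a kernel without this patch the same program typically shows a non-zero
AnonHugePages value for the locked mapping (depending on alignment), which
is exactly the partially-mlocked-THP situation the changelog describes.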