The patch titled
     Subject: mm/thp: make is_huge_zero_pmd() safe and quicker
has been added to the -mm tree.  Its filename is
     mm-thp-make-is_huge_zero_pmd-safe-and-quicker.patch

This patch should soon appear at
    https://ozlabs.org/~akpm/mmots/broken-out/mm-thp-make-is_huge_zero_pmd-safe-and-quicker.patch
and later at
    https://ozlabs.org/~akpm/mmotm/broken-out/mm-thp-make-is_huge_zero_pmd-safe-and-quicker.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Hugh Dickins <hughd@xxxxxxxxxx>
Subject: mm/thp: make is_huge_zero_pmd() safe and quicker

Most callers of is_huge_zero_pmd() supply a pmd already verified present;
but a few (notably zap_huge_pmd()) do not: it might be a pmd migration
entry, in which the pfn is encoded differently from a present pmd: which
might pass the is_huge_zero_pmd() test (though not on x86, since L1TF
forced us to protect against that); or perhaps even crash in pmd_page()
applied to a swap-like entry.

Make it safe by adding pmd_present() check into is_huge_zero_pmd()
itself; and make it quicker by saving huge_zero_pfn, so that
is_huge_zero_pmd() will not need to do that pmd_page() lookup each time.

__split_huge_pmd_locked() checked pmd_trans_huge() before: that worked,
but is unnecessary now that is_huge_zero_pmd() checks present.

Link: https://lkml.kernel.org/r/21ea9ca-a1f5-8b90-5e88-95fb1c49bbfa@xxxxxxxxxx
Fixes: e71769ae5260 ("mm: enable thp migration for shmem thp")
Signed-off-by: Hugh Dickins <hughd@xxxxxxxxxx>
Acked-by: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
Reviewed-by: Yang Shi <shy828301@xxxxxxxxx>
Cc: <stable@xxxxxxxxxxxxxxx>
Cc: Alistair Popple <apopple@xxxxxxxxxx>
Cc: Jan Kara <jack@xxxxxxx>
Cc: Jue Wang <juew@xxxxxxxxxx>
Cc: "Matthew Wilcox (Oracle)" <willy@xxxxxxxxxxxxx>
Cc: Miaohe Lin <linmiaohe@xxxxxxxxxx>
Cc: Minchan Kim <minchan@xxxxxxxxxx>
Cc: Naoya Horiguchi <naoya.horiguchi@xxxxxxx>
Cc: Oscar Salvador <osalvador@xxxxxxx>
Cc: Peter Xu <peterx@xxxxxxxxxx>
Cc: Ralph Campbell <rcampbell@xxxxxxxxxx>
Cc: Shakeel Butt <shakeelb@xxxxxxxxxx>
Cc: Wang Yugui <wangyugui@xxxxxxxxxxxx>
Cc: Zi Yan <ziy@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 include/linux/huge_mm.h |    8 +++++++-
 mm/huge_memory.c        |    5 ++++-
 2 files changed, 11 insertions(+), 2 deletions(-)

--- a/include/linux/huge_mm.h~mm-thp-make-is_huge_zero_pmd-safe-and-quicker
+++ a/include/linux/huge_mm.h
@@ -286,6 +286,7 @@ struct page *follow_devmap_pud(struct vm
 vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf, pmd_t orig_pmd);
 
 extern struct page *huge_zero_page;
+extern unsigned long huge_zero_pfn;
 
 static inline bool is_huge_zero_page(struct page *page)
 {
@@ -294,7 +295,7 @@ static inline bool is_huge_zero_page(str
 
 static inline bool is_huge_zero_pmd(pmd_t pmd)
 {
-	return is_huge_zero_page(pmd_page(pmd));
+	return READ_ONCE(huge_zero_pfn) == pmd_pfn(pmd) && pmd_present(pmd);
 }
 
 static inline bool is_huge_zero_pud(pud_t pud)
@@ -439,6 +440,11 @@ static inline bool is_huge_zero_page(str
 {
 	return false;
 }
+
+static inline bool is_huge_zero_pmd(pmd_t pmd)
+{
+	return false;
+}
 
 static inline bool is_huge_zero_pud(pud_t pud)
 {
--- a/mm/huge_memory.c~mm-thp-make-is_huge_zero_pmd-safe-and-quicker
+++ a/mm/huge_memory.c
@@ -62,6 +62,7 @@ static struct shrinker deferred_split_sh
 
 static atomic_t huge_zero_refcount;
 struct page *huge_zero_page __read_mostly;
+unsigned long huge_zero_pfn __read_mostly = ~0UL;
 
 bool transparent_hugepage_enabled(struct vm_area_struct *vma)
 {
@@ -98,6 +99,7 @@ retry:
 		__free_pages(zero_page, compound_order(zero_page));
 		goto retry;
 	}
+	WRITE_ONCE(huge_zero_pfn, page_to_pfn(zero_page));
 
 	/* We take additional reference here. It will be put back by shrinker */
 	atomic_set(&huge_zero_refcount, 2);
 
@@ -147,6 +149,7 @@ static unsigned long shrink_huge_zero_pa
 	if (atomic_cmpxchg(&huge_zero_refcount, 1, 0) == 1) {
 		struct page *zero_page = xchg(&huge_zero_page, NULL);
 		BUG_ON(zero_page == NULL);
+		WRITE_ONCE(huge_zero_pfn, ~0UL);
 		__free_pages(zero_page, compound_order(zero_page));
 		return HPAGE_PMD_NR;
 	}
@@ -2071,7 +2074,7 @@ static void __split_huge_pmd_locked(stru
 		return;
 	}
 
-	if (pmd_trans_huge(*pmd) && is_huge_zero_pmd(*pmd)) {
+	if (is_huge_zero_pmd(*pmd)) {
 		/*
 		 * FIXME: Do we want to invalidate secondary mmu by calling
 		 * mmu_notifier_invalidate_range() see comments below inside
_

Patches currently in -mm which might be from hughd@xxxxxxxxxx are

mm-thp-fix-__split_huge_pmd_locked-on-shmem-migration-entry.patch
mm-thp-make-is_huge_zero_pmd-safe-and-quicker.patch
mm-thp-try_to_unmap-use-ttu_sync-for-safe-splitting.patch
mm-thp-fix-vma_address-if-virtual-address-below-file-offset.patch
mm-thp-unmap_mapping_page-to-fix-thp-truncate_cleanup_page.patch
mm-page_vma_mapped_walk-use-page-for-pvmw-page.patch
mm-page_vma_mapped_walk-settle-pagehuge-on-entry.patch
mm-page_vma_mapped_walk-use-pmd_read_atomic.patch
mm-page_vma_mapped_walk-use-pmde-for-pvmw-pmd.patch
mm-page_vma_mapped_walk-prettify-pvmw_migration-block.patch
mm-page_vma_mapped_walk-crossing-page-table-boundary.patch
mm-page_vma_mapped_walk-add-a-level-of-indentation.patch
mm-page_vma_mapped_walk-use-goto-instead-of-while-1.patch
mm-page_vma_mapped_walk-get-vma_address_end-earlier.patch
mm-thp-fix-page_vma_mapped_walk-if-thp-mapped-by-ptes.patch
mm-thp-another-pvmw_sync-fix-in-page_vma_mapped_walk.patch
mm-thp-remap_page-is-only-needed-on-anonymous-thp.patch
mm-hwpoison_user_mappings-try_to_unmap-with-ttu_sync.patch
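
For readers less familiar with the huge zero page, the pattern the patch
applies can be modelled outside the kernel: cache the zero page's pfn,
reset it to an impossible sentinel (~0UL) when the page is freed, and have
the predicate compare against the cache *and* require the entry to be
present, so a migration (swap-like) entry whose pfn bits happen to collide
is still rejected.  The sketch below is illustrative user-space C only,
not kernel code: the names cached_zero_pfn, struct entry, entry fields and
is_huge_zero() are invented for the example, and the READ_ONCE()/
WRITE_ONCE() annotations the real patch uses against compiler caching and
load tearing are elided.

	/* Minimal user-space sketch of the cached-pfn-plus-sentinel
	 * pattern; names are hypothetical, not kernel identifiers. */
	#include <stdio.h>
	#include <stdbool.h>

	#define INVALID_PFN (~0UL)	/* sentinel: no zero page allocated */

	static unsigned long cached_zero_pfn = INVALID_PFN;

	struct entry {
		bool present;		/* stands in for pmd_present() */
		unsigned long pfn;	/* stands in for pmd_pfn(); not a real
					 * pfn when !present, as in a pmd
					 * migration entry */
	};

	/* Mirrors the shape of the new is_huge_zero_pmd(): compare the
	 * cached pfn, then insist the entry is present, so a non-present
	 * encoding that collides on pfn bits is rejected. */
	static bool is_huge_zero(struct entry e)
	{
		return cached_zero_pfn == e.pfn && e.present;
	}

	int main(void)
	{
		cached_zero_pfn = 42;	/* as if the zero page were allocated */

		struct entry zero = { .present = true,  .pfn = 42 };
		struct entry migr = { .present = false, .pfn = 42 };

		printf("present zero entry: %d\n", is_huge_zero(zero));	/* 1 */
		printf("migration entry:    %d\n", is_huge_zero(migr));	/* 0 */

		cached_zero_pfn = INVALID_PFN;	/* as if the shrinker freed it */
		printf("after shrink:       %d\n", is_huge_zero(zero));	/* 0 */
		return 0;
	}

The sentinel also shows why the "quicker" half works: while no huge zero
page exists, no present pmd can carry pfn ~0UL, so the comparison fails
without ever dereferencing the entry, where the old code paid a pmd_page()
lookup on every call.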