The patch titled
     Subject: mm: rmap: use flush_cache_range() to flush cache for hugetlb pages
has been added to the -mm tree.  Its filename is
     mm-rmap-use-flush_cache_range-to-flush-cache-for-hugetlb-pages.patch

This patch should soon appear at
    https://ozlabs.org/~akpm/mmots/broken-out/mm-rmap-use-flush_cache_range-to-flush-cache-for-hugetlb-pages.patch
and later at
    https://ozlabs.org/~akpm/mmotm/broken-out/mm-rmap-use-flush_cache_range-to-flush-cache-for-hugetlb-pages.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
Subject: mm: rmap: use flush_cache_range() to flush cache for hugetlb pages

Currently we use flush_cache_page() to flush the cache for anonymous
hugetlb pages when unmapping or migrating a hugetlb page mapping, but
flush_cache_page() only handles a PAGE_SIZE range on some architectures
(such as arm32 and arc), which leaves the rest of the huge page unflushed
and can cause cache issues.  Change to flush_cache_range() so that the
whole size of the hugetlb page is covered.

Link: https://lkml.kernel.org/r/dc903b378d1e2d26bbbe85409ab9d009631f175c.1651056365.git.baolin.wang@xxxxxxxxxxxxxxxxx
Signed-off-by: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
Cc: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
Cc: Mina Almasry <almasrymina@xxxxxxxxxx>
Cc: Muchun Song <songmuchun@xxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/rmap.c |   90 +++++++++++++++++++++++++++-------------------------
 1 file changed, 48 insertions(+), 42 deletions(-)

--- a/mm/rmap.c~mm-rmap-use-flush_cache_range-to-flush-cache-for-hugetlb-pages
+++ a/mm/rmap.c
@@ -1528,13 +1528,7 @@ static bool try_to_unmap_one(struct foli
 		anon_exclusive = folio_test_anon(folio) &&
 				 PageAnonExclusive(subpage);
 
-		if (folio_test_hugetlb(folio) && !folio_test_anon(folio)) {
-			/*
-			 * To call huge_pmd_unshare, i_mmap_rwsem must be
-			 * held in write mode.  Caller needs to explicitly
-			 * do this outside rmap routines.
-			 */
-			VM_BUG_ON(!(flags & TTU_RMAP_LOCKED));
+		if (folio_test_hugetlb(folio)) {
 			/*
 			 * huge_pmd_unshare may unmap an entire PMD page.
 			 * There is no way of knowing exactly which PMDs may
@@ -1544,22 +1538,31 @@ static bool try_to_unmap_one(struct foli
 			 */
 			flush_cache_range(vma, range.start, range.end);
 
-			if (huge_pmd_unshare(mm, vma, &address, pvmw.pte)) {
-				flush_tlb_range(vma, range.start, range.end);
-				mmu_notifier_invalidate_range(mm, range.start,
-							      range.end);
-
+			if (!folio_test_anon(folio)) {
 				/*
-				 * The ref count of the PMD page was dropped
-				 * which is part of the way map counting
-				 * is done for shared PMDs.  Return 'true'
-				 * here.  When there is no other sharing,
-				 * huge_pmd_unshare returns false and we will
-				 * unmap the actual page and drop map count
-				 * to zero.
+				 * To call huge_pmd_unshare, i_mmap_rwsem must be
+				 * held in write mode.  Caller needs to explicitly
+				 * do this outside rmap routines.
 				 */
-				page_vma_mapped_walk_done(&pvmw);
-				break;
+				VM_BUG_ON(!(flags & TTU_RMAP_LOCKED));
+
+				if (huge_pmd_unshare(mm, vma, &address, pvmw.pte)) {
+					flush_tlb_range(vma, range.start, range.end);
+					mmu_notifier_invalidate_range(mm, range.start,
+								      range.end);
+
+					/*
+					 * The ref count of the PMD page was dropped
+					 * which is part of the way map counting
+					 * is done for shared PMDs.  Return 'true'
+					 * here.  When there is no other sharing,
+					 * huge_pmd_unshare returns false and we will
+					 * unmap the actual page and drop map count
+					 * to zero.
+					 */
+					page_vma_mapped_walk_done(&pvmw);
+					break;
+				}
 			}
 		} else {
 			flush_cache_page(vma, address, pte_pfn(*pvmw.pte));
@@ -1885,13 +1888,7 @@ static bool try_to_migrate_one(struct fo
 		anon_exclusive = folio_test_anon(folio) &&
 				 PageAnonExclusive(subpage);
 
-		if (folio_test_hugetlb(folio) && !folio_test_anon(folio)) {
-			/*
-			 * To call huge_pmd_unshare, i_mmap_rwsem must be
-			 * held in write mode.  Caller needs to explicitly
-			 * do this outside rmap routines.
-			 */
-			VM_BUG_ON(!(flags & TTU_RMAP_LOCKED));
+		if (folio_test_hugetlb(folio)) {
 			/*
 			 * huge_pmd_unshare may unmap an entire PMD page.
 			 * There is no way of knowing exactly which PMDs may
@@ -1901,22 +1898,31 @@ static bool try_to_migrate_one(struct fo
 			 */
 			flush_cache_range(vma, range.start, range.end);
 
-			if (huge_pmd_unshare(mm, vma, &address, pvmw.pte)) {
-				flush_tlb_range(vma, range.start, range.end);
-				mmu_notifier_invalidate_range(mm, range.start,
-							      range.end);
-
+			if (!folio_test_anon(folio)) {
 				/*
-				 * The ref count of the PMD page was dropped
-				 * which is part of the way map counting
-				 * is done for shared PMDs.  Return 'true'
-				 * here.  When there is no other sharing,
-				 * huge_pmd_unshare returns false and we will
-				 * unmap the actual page and drop map count
-				 * to zero.
+				 * To call huge_pmd_unshare, i_mmap_rwsem must be
+				 * held in write mode.  Caller needs to explicitly
+				 * do this outside rmap routines.
 				 */
-				page_vma_mapped_walk_done(&pvmw);
-				break;
+				VM_BUG_ON(!(flags & TTU_RMAP_LOCKED));
+
+				if (huge_pmd_unshare(mm, vma, &address, pvmw.pte)) {
+					flush_tlb_range(vma, range.start, range.end);
+					mmu_notifier_invalidate_range(mm, range.start,
+								      range.end);
+
+					/*
+					 * The ref count of the PMD page was dropped
+					 * which is part of the way map counting
+					 * is done for shared PMDs.  Return 'true'
+					 * here.  When there is no other sharing,
+					 * huge_pmd_unshare returns false and we will
+					 * unmap the actual page and drop map count
+					 * to zero.
+					 */
+					page_vma_mapped_walk_done(&pvmw);
+					break;
+				}
 			}
 		} else {
 			flush_cache_page(vma, address, pte_pfn(*pvmw.pte));
_

Patches currently in -mm which might be from baolin.wang@xxxxxxxxxxxxxxxxx are

mm-migrate-simplify-the-refcount-validation-when-migrating-hugetlb-mapping.patch
mm-hugetlb-add-missing-cache-flushing-in-hugetlb_unshare_all_pmds.patch
mm-hugetlb-considering-pmd-sharing-when-flushing-cache-tlbs.patch
mm-rmap-move-the-cache-flushing-to-the-correct-place-for-hugetlb-pmd-sharing.patch
mm-rmap-use-flush_cache_range-to-flush-cache-for-hugetlb-pages.patch
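
For illustration only (this is not part of the patch): a minimal sketch of
the idea behind the change, using a hypothetical helper name
(flush_cache_for_mapping() does not exist in the kernel) and assuming the
hugetlb virtual address is aligned to the huge page size.  The real patch
derives the span from range.start/range.end inside try_to_unmap_one() and
try_to_migrate_one(); only flush_cache_range(), flush_cache_page(),
is_vm_hugetlb_page(), hstate_vma() and huge_page_size() below are existing
kernel interfaces.

#include <linux/mm.h>
#include <linux/hugetlb.h>
#include <asm/cacheflush.h>

/* Hypothetical helper: flush the CPU cache for one mapped page of @vma. */
static void flush_cache_for_mapping(struct vm_area_struct *vma,
				    unsigned long address, unsigned long pfn)
{
	if (is_vm_hugetlb_page(vma)) {
		/*
		 * A hugetlb page spans many PAGE_SIZE slices, and on some
		 * architectures (arm32, arc, ...) flush_cache_page() only
		 * flushes a single PAGE_SIZE range.  Flush the whole huge
		 * page instead.
		 */
		unsigned long size = huge_page_size(hstate_vma(vma));
		unsigned long start = address & ~(size - 1);

		flush_cache_range(vma, start, start + size);
	} else {
		/* A base page: the per-page flush covers it completely. */
		flush_cache_page(vma, address, pfn);
	}
}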