The patch titled
     Subject: dax: fix cache flush on PMD-mapped pages
has been added to the -mm tree.  Its filename is
     dax-fix-cache-flush-on-pmd-mapped-pages.patch

This patch should soon appear at
    https://ozlabs.org/~akpm/mmots/broken-out/dax-fix-cache-flush-on-pmd-mapped-pages.patch
and later at
    https://ozlabs.org/~akpm/mmotm/broken-out/dax-fix-cache-flush-on-pmd-mapped-pages.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when
    testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Muchun Song <songmuchun@xxxxxxxxxxxxx>
Subject: dax: fix cache flush on PMD-mapped pages

flush_cache_page() only removes a PAGE_SIZE-sized range from the cache.
For a PMD-mapped THP it therefore flushes only the head page rather than
the whole huge page.  Replace it with flush_cache_range() to fix this.

This is only a documentation issue with respect to properly documenting
the expected usage of cache flushing before modifying the pmd.  In
practice it is not a problem, because DAX is not available on
architectures with virtually indexed caches, per:

  commit d92576f1167c ("dax: does not work correctly with virtual aliasing caches")

Link: https://lkml.kernel.org/r/20220403053957.10770-3-songmuchun@xxxxxxxxxxxxx
Fixes: f729c8c9b24f ("dax: wrprotect pmd_t in dax_mapping_entry_mkclean")
Signed-off-by: Muchun Song <songmuchun@xxxxxxxxxxxxx>
Reviewed-by: Dan Williams <dan.j.williams@xxxxxxxxx>
Reviewed-by: Christoph Hellwig <hch@xxxxxx>
Cc: Alistair Popple <apopple@xxxxxxxxxx>
Cc: Al Viro <viro@xxxxxxxxxxxxxxxxxx>
Cc: Hugh Dickins <hughd@xxxxxxxxxx>
Cc: Jan Kara <jack@xxxxxxx>
Cc: "Kirill A. Shutemov" <kirill.shutemov@xxxxxxxxxxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: Ralph Campbell <rcampbell@xxxxxxxxxx>
Cc: Ross Zwisler <zwisler@xxxxxxxxxx>
Cc: Xiongchun Duan <duanxiongchun@xxxxxxxxxxxxx>
Cc: Xiyu Yang <xiyuyang19@xxxxxxxxxxxx>
Cc: Yang Shi <shy828301@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 fs/dax.c |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

--- a/fs/dax.c~dax-fix-cache-flush-on-pmd-mapped-pages
+++ a/fs/dax.c
@@ -845,7 +845,8 @@ static void dax_entry_mkclean(struct add
 		if (!pmd_dirty(*pmdp) && !pmd_write(*pmdp))
 			goto unlock_pmd;

-		flush_cache_page(vma, address, pfn);
+		flush_cache_range(vma, address,
+				  address + HPAGE_PMD_SIZE);
 		pmd = pmdp_invalidate(vma, address, pmdp);
 		pmd = pmd_wrprotect(pmd);
 		pmd = pmd_mkclean(pmd);
_

Patches currently in -mm which might be from songmuchun@xxxxxxxxxxxxx are

mm-hugetlb_vmemmap-introduce-arch_want_hugetlb_page_free_vmemmap.patch
arm64-mm-hugetlb-enable-hugetlb_page_free_vmemmap-for-arm64.patch
mm-hugetlb_vmemmap-cleanup-hugetlb_vmemmap-related-functions.patch
mm-hugetlb_vmemmap-cleanup-hugetlb_free_vmemmap_enabled.patch
mm-hugetlb_vmemmap-cleanup-config_hugetlb_page_free_vmemmap.patch
mm-rmap-fix-cache-flush-on-thp-pages.patch
dax-fix-cache-flush-on-pmd-mapped-pages.patch
mm-rmap-introduce-pfn_mkclean_range-to-cleans-ptes.patch
mm-pvmw-add-support-for-walking-devmap-pages.patch
dax-fix-missing-writeprotect-the-pte-entry.patch
mm-simplify-follow_invalidate_pte.patch
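
For illustration only (this is not part of the patch above): the coverage
gap described in the changelog can be shown with a minimal userspace
sketch.  The 4 KiB base page and 2 MiB PMD mapping sizes below are
assumptions for a typical x86-64 configuration, and the kernel's
flush_cache_page()/flush_cache_range() are only referenced in comments
here, not called.

/*
 * Sketch: how many bytes a single-page flush covers compared with a
 * flush over the whole PMD-mapped range (sizes assumed, see above).
 */
#include <stdio.h>

#define PAGE_SIZE	4096UL			/* assumed 4 KiB base page */
#define HPAGE_PMD_SIZE	(512UL * PAGE_SIZE)	/* assumed 2 MiB PMD mapping */

int main(void)
{
	unsigned long address = 0x200000UL;	/* example PMD-aligned address */

	/* what a flush_cache_page(vma, address, pfn)-style call covers */
	printf("single-page flush: [%#lx, %#lx) -> %lu bytes\n",
	       address, address + PAGE_SIZE, PAGE_SIZE);

	/* what flush_cache_range(vma, address, address + HPAGE_PMD_SIZE) covers */
	printf("range flush:       [%#lx, %#lx) -> %lu bytes\n",
	       address, address + HPAGE_PMD_SIZE, HPAGE_PMD_SIZE);

	printf("base pages missed by the single-page variant: %lu\n",
	       HPAGE_PMD_SIZE / PAGE_SIZE - 1);
	return 0;
}

With those assumed sizes a single-page flush leaves 511 of the 512 base
pages of the PMD mapping unflushed, which is the gap the switch to
flush_cache_range() closes.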