The patch titled
     Subject: mm: rmap: fix cache flush on THP pages
has been added to the -mm tree.  Its filename is
     mm-rmap-fix-cache-flush-on-thp-pages.patch

This patch should soon appear at
    https://ozlabs.org/~akpm/mmots/broken-out/mm-rmap-fix-cache-flush-on-thp-pages.patch
and later at
    https://ozlabs.org/~akpm/mmotm/broken-out/mm-rmap-fix-cache-flush-on-thp-pages.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Muchun Song <songmuchun@xxxxxxxxxxxxx>
Subject: mm: rmap: fix cache flush on THP pages

Patch series "Fix some bugs related to rmap and dax", v5.

Patches 1-2 fix a cache flush bug; because subsequent patches depend on
those changes, they are placed in this series.  Patches 3-4 are
preparation for fixing a dax bug in patch 5.  Patch 6 is a code cleanup,
since the previous patch removes the usage of follow_invalidate_pte().


This patch (of 6):

flush_cache_page() only removes a PAGE_SIZE-sized range from the cache,
so for a PMD-mapped THP it covers only the head page rather than the
whole huge page.  Replace it with flush_cache_range() to fix this issue.
No problems have been observed from this so far, probably because few
architectures have virtually indexed caches.

Link: https://lkml.kernel.org/r/20220318074529.5261-1-songmuchun@xxxxxxxxxxxxx
Link: https://lkml.kernel.org/r/20220318074529.5261-2-songmuchun@xxxxxxxxxxxxx
Fixes: f27176cfc363 ("mm: convert page_mkclean_one() to use page_vma_mapped_walk()")
Signed-off-by: Muchun Song <songmuchun@xxxxxxxxxxxxx>
Reviewed-by: Yang Shi <shy828301@xxxxxxxxx>
Reviewed-by: Dan Williams <dan.j.williams@xxxxxxxxx>
Reviewed-by: Christoph Hellwig <hch@xxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: Jan Kara <jack@xxxxxxx>
Cc: Al Viro <viro@xxxxxxxxxxxxxxxxxx>
Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Cc: Alistair Popple <apopple@xxxxxxxxxx>
Cc: Ralph Campbell <rcampbell@xxxxxxxxxx>
Cc: Hugh Dickins <hughd@xxxxxxxxxx>
Cc: Xiyu Yang <xiyuyang19@xxxxxxxxxxxx>
Cc: "Kirill A. Shutemov" <kirill.shutemov@xxxxxxxxxxxxxxx>
Cc: Ross Zwisler <zwisler@xxxxxxxxxx>
Cc: Xiongchun Duan <duanxiongchun@xxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/rmap.c |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

--- a/mm/rmap.c~mm-rmap-fix-cache-flush-on-thp-pages
+++ a/mm/rmap.c
@@ -970,7 +970,8 @@ static bool page_mkclean_one(struct foli
 		if (!pmd_dirty(*pmd) && !pmd_write(*pmd))
 			continue;

-		flush_cache_page(vma, address, folio_pfn(folio));
+		flush_cache_range(vma, address,
+				  address + HPAGE_PMD_SIZE);
 		entry = pmdp_invalidate(vma, address, pmd);
 		entry = pmd_wrprotect(entry);
 		entry = pmd_mkclean(entry);
_

Patches currently in -mm which might be from songmuchun@xxxxxxxxxxxxx are

mm-kfence-fix-objcgs-vector-allocation.patch
mm-rmap-fix-cache-flush-on-thp-pages.patch
dax-fix-cache-flush-on-pmd-mapped-pages.patch
mm-rmap-introduce-pfn_mkclean_range-to-cleans-ptes.patch
mm-pvmw-add-support-for-walking-devmap-pages.patch
dax-fix-missing-writeprotect-the-pte-entry.patch
dax-fix-missing-writeprotect-the-pte-entry-v6.patch
mm-simplify-follow_invalidate_pte.patch
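
For illustration only (not part of the patch): a minimal userspace sketch of
the range arithmetic behind the fix, assuming the common x86_64 configuration
of 4 KiB base pages and 2 MiB PMD-sized THPs.  The PAGE_SIZE and
HPAGE_PMD_SIZE macros and the example address below are local stand-ins, not
the kernel's definitions.

/*
 * Illustrative userspace sketch (not kernel code).  Assumes 4 KiB base
 * pages and 2 MiB PMD-sized huge pages (x86_64 defaults); PAGE_SIZE and
 * HPAGE_PMD_SIZE are local stand-ins for the kernel constants.
 */
#include <stdio.h>

#define PAGE_SIZE	4096UL
#define HPAGE_PMD_SIZE	(512UL * PAGE_SIZE)	/* 2 MiB */

int main(void)
{
	unsigned long address = 0x200000UL;	/* hypothetical THP-aligned VA */

	/* flush_cache_page() covers a single base page at 'address' ... */
	printf("flush_cache_page:  [%#lx, %#lx) = %lu bytes\n",
	       address, address + PAGE_SIZE, PAGE_SIZE);

	/*
	 * ... whereas a PMD-mapped THP spans HPAGE_PMD_SIZE bytes, which is
	 * the range the flush_cache_range() call in the patch covers.
	 */
	printf("flush_cache_range: [%#lx, %#lx) = %lu bytes (%lu base pages)\n",
	       address, address + HPAGE_PMD_SIZE, HPAGE_PMD_SIZE,
	       HPAGE_PMD_SIZE / PAGE_SIZE);

	return 0;
}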