The quilt patch titled
     Subject: mm/memory: fix folio_set_dirty() vs. folio_mark_dirty() in zap_pte_range()
has been removed from the -mm tree.  Its filename was
     mm-memory-fix-folio_set_dirty-vs-folio_mark_dirty-in-zap_pte_range.patch

This patch was dropped because it was merged into the mm-stable branch
of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

------------------------------------------------------
From: David Hildenbrand <david@xxxxxxxxxx>
Subject: mm/memory: fix folio_set_dirty() vs. folio_mark_dirty() in zap_pte_range()
Date: Mon, 22 Jan 2024 18:17:51 +0100

The correct folio replacement for "set_page_dirty()" is
"folio_mark_dirty()", not "folio_set_dirty()".  Using the latter won't
properly inform the FS using the dirty_folio() callback.

This has been found by code inspection, but likely this can result in
some real trouble when zapping dirty PTEs that point at clean pagecache
folios.

Yuezhang Mo said: "Without this fix, testing the latest exfat with
xfstests, test cases generic/029 and generic/030 will fail."

Link: https://lkml.kernel.org/r/20240122171751.272074-1-david@xxxxxxxxxx
Fixes: c46265030b0f ("mm/memory: page_remove_rmap() -> folio_remove_rmap_pte()")
Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>
Reported-by: Ryan Roberts <ryan.roberts@xxxxxxx>
Closes: https://lkml.kernel.org/r/2445cedb-61fb-422c-8bfb-caf0a2beed62@xxxxxxx
Reviewed-by: Ryan Roberts <ryan.roberts@xxxxxxx>
Cc: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Reviewed-by: Yuezhang Mo <Yuezhang.Mo@xxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/memory.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/mm/memory.c~mm-memory-fix-folio_set_dirty-vs-folio_mark_dirty-in-zap_pte_range
+++ a/mm/memory.c
@@ -1464,7 +1464,7 @@ static unsigned long zap_pte_range(struc
 			delay_rmap = 0;
 			if (!folio_test_anon(folio)) {
 				if (pte_dirty(ptent)) {
-					folio_set_dirty(folio);
+					folio_mark_dirty(folio);
 					if (tlb_delay_rmap(tlb)) {
 						delay_rmap = 1;
 						force_flush = 1;
_

Patches currently in -mm which might be from david@xxxxxxxxxx are
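
[Editor's note: the following is a minimal userspace sketch of the distinction
the patch description relies on, namely that folio_mark_dirty() routes the
dirtying through the address_space's ->dirty_folio() callback while
folio_set_dirty() only flips the folio flag.  The struct layouts and function
bodies below are simplified assumptions for illustration; they are not the
real kernel implementations.]

/*
 * Illustrative model only -- not kernel code.  The real folio_set_dirty()
 * is a plain flag operation, and the real folio_mark_dirty() additionally
 * notifies the filesystem via the mapping's ->dirty_folio() hook, which is
 * the behavior the fix restores for pagecache folios in zap_pte_range().
 */
#include <stdbool.h>
#include <stdio.h>

struct folio;

struct address_space_ops {
	/* filesystem hook, e.g. for dirty accounting/writeback tracking */
	bool (*dirty_folio)(struct folio *folio);
};

struct address_space {
	const struct address_space_ops *a_ops;
};

struct folio {
	bool dirty;
	struct address_space *mapping;	/* NULL for anon folios */
};

/* Model of folio_set_dirty(): only sets the flag; the FS never hears about it. */
static bool folio_set_dirty(struct folio *folio)
{
	bool was_dirty = folio->dirty;

	folio->dirty = true;
	return !was_dirty;
}

/* Model of folio_mark_dirty(): also informs the filesystem via ->dirty_folio(). */
static bool folio_mark_dirty(struct folio *folio)
{
	if (folio->mapping && folio->mapping->a_ops->dirty_folio)
		return folio->mapping->a_ops->dirty_folio(folio);
	return folio_set_dirty(folio);
}

/* Hypothetical filesystem callback standing in for a real ->dirty_folio(). */
static bool demo_dirty_folio(struct folio *folio)
{
	printf("filesystem notified: folio dirtied\n");
	return folio_set_dirty(folio);
}

int main(void)
{
	const struct address_space_ops ops = { .dirty_folio = demo_dirty_folio };
	struct address_space mapping = { .a_ops = &ops };
	struct folio pagecache_folio = { .dirty = false, .mapping = &mapping };

	folio_set_dirty(&pagecache_folio);	/* silent: FS misses the dirtying */
	pagecache_folio.dirty = false;
	folio_mark_dirty(&pagecache_folio);	/* FS sees it via ->dirty_folio() */
	return 0;
}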