The patch titled
     Subject: mm/huge_memory: fix folio_set_dirty() vs. folio_mark_dirty()
has been added to the -mm mm-hotfixes-unstable branch.  Its filename is
     mm-huge_memory-fix-folio_set_dirty-vs-folio_mark_dirty.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-huge_memory-fix-folio_set_dirty-vs-folio_mark_dirty.patch

This patch will later appear in the mm-hotfixes-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when
    testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: David Hildenbrand <david@xxxxxxxxxx>
Subject: mm/huge_memory: fix folio_set_dirty() vs. folio_mark_dirty()
Date: Mon, 22 Jan 2024 18:54:07 +0100

The correct folio replacement for "set_page_dirty()" is
"folio_mark_dirty()", not "folio_set_dirty()".  Using the latter won't
properly inform the FS using the dirty_folio() callback.

This has been found by code inspection, but likely this can result in
some real trouble.
Link: https://lkml.kernel.org/r/20240122175407.307992-1-david@xxxxxxxxxx
Fixes: a8e61d584eda0 ("mm/huge_memory: page_remove_rmap() -> folio_remove_rmap_pmd()")
Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>
Cc: Ryan Roberts <ryan.roberts@xxxxxxx>
Cc: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/huge_memory.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

--- a/mm/huge_memory.c~mm-huge_memory-fix-folio_set_dirty-vs-folio_mark_dirty
+++ a/mm/huge_memory.c
@@ -2437,7 +2437,7 @@ static void __split_huge_pmd_locked(stru
 	page = pmd_page(old_pmd);
 	folio = page_folio(page);
 	if (!folio_test_dirty(folio) && pmd_dirty(old_pmd))
-		folio_set_dirty(folio);
+		folio_mark_dirty(folio);
 	if (!folio_test_referenced(folio) && pmd_young(old_pmd))
 		folio_set_referenced(folio);
 	folio_remove_rmap_pmd(folio, page, vma);
@@ -3563,7 +3563,7 @@ int set_pmd_migration_entry(struct page_
 	}
 
 	if (pmd_dirty(pmdval))
-		folio_set_dirty(folio);
+		folio_mark_dirty(folio);
 	if (pmd_write(pmdval))
 		entry = make_writable_migration_entry(page_to_pfn(page));
 	else if (anon_exclusive)
_

Patches currently in -mm which might be from david@xxxxxxxxxx are

uprobes-use-pagesize-aligned-virtual-address-when-replacing-pages.patch
mm-huge_memory-fix-folio_set_dirty-vs-folio_mark_dirty.patch
arm-pgtable-define-pfn_pte_shift-on-arm-and-arm64.patch
nios2-pgtable-define-pfn_pte_shift.patch
powerpc-pgtable-define-pfn_pte_shift.patch
riscv-pgtable-define-pfn_pte_shift.patch
s390-pgtable-define-pfn_pte_shift.patch
sparc-pgtable-define-pfn_pte_shift.patch
mm-memory-factor-out-copying-the-actual-pte-in-copy_present_pte.patch
mm-memory-pass-pte-to-copy_present_pte.patch
mm-memory-optimize-fork-with-pte-mapped-thp.patch
mm-memory-ignore-dirty-accessed-soft-dirty-bits-in-folio_pte_batch.patch
mm-memory-ignore-writable-bit-in-folio_pte_batch.patch