With this change in place, there is no need to have separate readable and
writable device-exclusive entry types, and we'll merge them next separately.

Note that, during fork(), we first convert the device-exclusive entries back
to ordinary PTEs, and we only ever allow conversion of writable PTEs to
device-exclusive -- only mprotect can currently change them to
readable-device-exclusive.  Consequently, we always expect
PageAnonExclusive(page)==true and can_change_pte_writable()==true, unless we
are dealing with soft-dirty tracking or uffd-wp.  But reusing
can_change_pte_writable() for now is cleaner.

Link: https://lkml.kernel.org/r/20250210193801.781278-6-david@xxxxxxxxxx
Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>
Tested-by: Alistair Popple <apopple@xxxxxxxxxx>
Cc: Alex Shi <alexs@xxxxxxxxxx>
Cc: Danilo Krummrich <dakr@xxxxxxxxxx>
Cc: Dave Airlie <airlied@xxxxxxxxx>
Cc: Jann Horn <jannh@xxxxxxxxxx>
Cc: Jason Gunthorpe <jgg@xxxxxxxxxx>
Cc: Jerome Glisse <jglisse@xxxxxxxxxx>
Cc: John Hubbard <jhubbard@xxxxxxxxxx>
Cc: Jonathan Corbet <corbet@xxxxxxx>
Cc: Karol Herbst <kherbst@xxxxxxxxxx>
Cc: Liam Howlett <liam.howlett@xxxxxxxxxx>
Cc: Lorenzo Stoakes <lorenzo.stoakes@xxxxxxxxxx>
Cc: Lyude <lyude@xxxxxxxxxx>
Cc: "Masami Hiramatsu (Google)" <mhiramat@xxxxxxxxxx>
Cc: Oleg Nesterov <oleg@xxxxxxxxxx>
Cc: Pasha Tatashin <pasha.tatashin@xxxxxxxxxx>
Cc: Peter Xu <peterx@xxxxxxxxxx>
Cc: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
Cc: SeongJae Park <sj@xxxxxxxxxx>
Cc: Simona Vetter <simona.vetter@xxxxxxxx>
Cc: Vlastimil Babka <vbabka@xxxxxxx>
Cc: Yanteng Si <si.yanteng@xxxxxxxxx>
Cc: Barry Song <v-songbaohua@xxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/memory.c |   11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

--- a/mm/memory.c~mm-memory-detect-writability-in-restore_exclusive_pte-through-can_change_pte_writable
+++ a/mm/memory.c
@@ -723,18 +723,21 @@ static void restore_exclusive_pte(struct
 	struct folio *folio = page_folio(page);
 	pte_t orig_pte;
 	pte_t pte;
-	swp_entry_t entry;
 
 	orig_pte = ptep_get(ptep);
 	pte = pte_mkold(mk_pte(page, READ_ONCE(vma->vm_page_prot)));
 	if (pte_swp_soft_dirty(orig_pte))
 		pte = pte_mksoft_dirty(pte);
 
-	entry = pte_to_swp_entry(orig_pte);
 	if (pte_swp_uffd_wp(orig_pte))
 		pte = pte_mkuffd_wp(pte);
-	else if (is_writable_device_exclusive_entry(entry))
-		pte = maybe_mkwrite(pte_mkdirty(pte), vma);
+
+	if ((vma->vm_flags & VM_WRITE) &&
+	    can_change_pte_writable(vma, address, pte)) {
+		if (folio_test_dirty(folio))
+			pte = pte_mkdirty(pte);
+		pte = pte_mkwrite(pte, vma);
+	}
 
 	VM_BUG_ON_FOLIO(pte_write(pte) && (!folio_test_anon(folio) &&
 					   PageAnonExclusive(page)), folio);
_
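
For quick reference, the write-upgrade path in restore_exclusive_pte() after
this patch reads roughly as sketched below.  This is only a comment-annotated
restatement of the hunk above -- same helpers, same arguments -- with the
comments summarizing the reasoning from the changelog:

	orig_pte = ptep_get(ptep);
	pte = pte_mkold(mk_pte(page, READ_ONCE(vma->vm_page_prot)));
	if (pte_swp_soft_dirty(orig_pte))
		pte = pte_mksoft_dirty(pte);
	if (pte_swp_uffd_wp(orig_pte))
		pte = pte_mkuffd_wp(pte);

	/*
	 * Map the PTE writable only if the VMA allows writes and
	 * can_change_pte_writable() agrees, i.e. no soft-dirty or uffd-wp
	 * write notification is pending and the anon page is exclusive.
	 * Mark the PTE dirty only if the folio is dirty: restoring the
	 * entry is not necessarily a write access.
	 */
	if ((vma->vm_flags & VM_WRITE) &&
	    can_change_pte_writable(vma, address, pte)) {
		if (folio_test_dirty(folio))
			pte = pte_mkdirty(pte);
		pte = pte_mkwrite(pte, vma);
	}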

Patches currently in -mm which might be from david@xxxxxxxxxx are

mm-factor-out-large-folio-handling-from-folio_order-into-folio_large_order.patch
mm-factor-out-large-folio-handling-from-folio_nr_pages-into-folio_large_nr_pages.patch
mm-let-_folio_nr_pages-overlay-memcg_data-in-first-tail-page.patch
mm-let-_folio_nr_pages-overlay-memcg_data-in-first-tail-page-fix.patch
mm-move-hugetlb-specific-things-in-folio-to-page.patch
mm-move-_pincount-in-folio-to-page-on-32bit.patch
mm-move-_entire_mapcount-in-folio-to-page-on-32bit.patch
mm-rmap-pass-dst_vma-to-folio_dup_file_rmap_pte-and-friends.patch
mm-rmap-pass-vma-to-__folio_add_rmap.patch
mm-rmap-abstract-large-mapcount-operations-for-large-folios-hugetlb.patch
bit_spinlock-__always_inline-unlock-functions.patch
mm-rmap-use-folio_large_nr_pages-in-add-remove-functions.patch
mm-rmap-basic-mm-owner-tracking-for-large-folios-hugetlb.patch
mm-copy-on-write-cow-reuse-support-for-pte-mapped-thp.patch
mm-convert-folio_likely_mapped_shared-to-folio_maybe_mapped_shared.patch
mm-config_no_page_mapcount-to-prepare-for-not-maintain-per-page-mapcounts-in-large-folios.patch
fs-proc-page-remove-per-page-mapcount-dependency-for-proc-kpagecount-config_no_page_mapcount.patch
fs-proc-task_mmu-remove-per-page-mapcount-dependency-for-pm_mmap_exclusive-config_no_page_mapcount.patch
fs-proc-task_mmu-remove-per-page-mapcount-dependency-for-mapmax-config_no_page_mapcount.patch
fs-proc-task_mmu-remove-per-page-mapcount-dependency-for-smaps-smaps_rollup-config_no_page_mapcount.patch
mm-stop-maintaining-the-per-page-mapcount-of-large-folios-config_no_page_mapcount.patch