The quilt patch titled
     Subject: mm/swapfile: fix lost swap bits in unuse_pte()
has been removed from the -mm tree.  Its filename was
     mm-swapfile-fix-lost-swap-bits-in-unuse_pte.patch

This patch was dropped because an updated version will be merged

------------------------------------------------------
From: Miaohe Lin <linmiaohe@xxxxxxxxxx>
Subject: mm/swapfile: fix lost swap bits in unuse_pte()

This was found by code review only; there is no real-world report.

When swapping is turned off, the bits stored in the swap ptes could be
lost.  The new rmap-exclusive bit is fine since it is turned into a page
flag, but soft-dirty and uffd-wp are not.  Add them.

Link: https://lkml.kernel.org/r/20220424091105.48374-3-linmiaohe@xxxxxxxxxx
Signed-off-by: Miaohe Lin <linmiaohe@xxxxxxxxxx>
Suggested-by: Peter Xu <peterx@xxxxxxxxxx>
Reviewed-by: David Hildenbrand <david@xxxxxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: Vlastimil Babka <vbabka@xxxxxxx>
Cc: David Howells <dhowells@xxxxxxxxxx>
Cc: NeilBrown <neilb@xxxxxxx>
Cc: Alistair Popple <apopple@xxxxxxxxxx>
Cc: Suren Baghdasaryan <surenb@xxxxxxxxxx>
Cc: Minchan Kim <minchan@xxxxxxxxxx>
Cc: Stephen Rothwell <sfr@xxxxxxxxxxxxxxxx>
Cc: Naoya Horiguchi <naoya.horiguchi@xxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/swapfile.c |   10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

--- a/mm/swapfile.c~mm-swapfile-fix-lost-swap-bits-in-unuse_pte
+++ a/mm/swapfile.c
@@ -1783,7 +1783,7 @@ static int unuse_pte(struct vm_area_stru
 {
 	struct page *swapcache;
 	spinlock_t *ptl;
-	pte_t *pte;
+	pte_t *pte, new_pte;
 	int ret = 1;
 
 	swapcache = page;
@@ -1832,8 +1832,12 @@ static int unuse_pte(struct vm_area_stru
 		page_add_new_anon_rmap(page, vma, addr);
 		lru_cache_add_inactive_or_unevictable(page, vma);
 	}
-	set_pte_at(vma->vm_mm, addr, pte,
-		   pte_mkold(mk_pte(page, vma->vm_page_prot)));
+	new_pte = pte_mkold(mk_pte(page, vma->vm_page_prot));
+	if (pte_swp_soft_dirty(*pte))
+		new_pte = pte_mksoft_dirty(new_pte);
+	if (pte_swp_uffd_wp(*pte))
+		new_pte = pte_mkuffd_wp(new_pte);
+	set_pte_at(vma->vm_mm, addr, pte, new_pte);
 	swap_free(entry);
 out:
 	pte_unmap_unlock(pte, ptl);
_

Patches currently in -mm which might be from linmiaohe@xxxxxxxxxx are

mm-vmscan-take-min_slab_pages-into-account-when-try-to-call-shrink_node.patch
mm-vmscan-add-a-comment-about-madv_free-pages-check-in-folio_check_dirty_writeback.patch
mm-vmscan-introduce-helper-function-reclaim_page_list.patch
mm-vmscan-take-all-base-pages-of-thp-into-account-when-race-with-speculative-reference.patch
mm-vmscan-remove-obsolete-comment-in-kswapd_run.patch
mm-vmscan-use-helper-folio_is_file_lru.patch
mm-vmscan-use-helper-folio_is_file_lru-fix.patch
mm-z3fold-fix-sheduling-while-atomic.patch
mm-z3fold-fix-possible-null-pointer-dereferencing.patch
mm-z3fold-remove-buggy-use-of-stale-list-for-allocation.patch
mm-z3fold-throw-warning-on-failure-of-trylock_page-in-z3fold_alloc.patch
revert-mm-z3foldc-allow-__gfp_highmem-in-z3fold_alloc.patch
mm-z3fold-put-z3fold-page-back-into-unbuddied-list-when-reclaim-or-migration-fails.patch
mm-z3fold-always-clear-page_claimed-under-z3fold-page-lock.patch
mm-z3fold-fix-z3fold_reclaim_page-races-with-z3fold_free.patch
mm-z3fold-fix-z3fold_page_migrate-races-with-z3fold_map.patch
mm-swap-use-helper-is_swap_pte-in-swap_vma_readahead.patch
mm-swap-use-helper-macro-__attr_rw.patch
mm-swap-fold-__swap_info_get-into-its-sole-caller.patch
mm-swap-remove-unneeded-return-value-of-free_swap_slot.patch
mm-swap-print-bad-swap-offset-entry-in-get_swap_device.patch
mm-swap-remove-buggy-cache-nr-check-in-refill_swap_slots_cache.patch
mm-swap-remove-unneeded-p-=-null-check-in-__swap_duplicate.patch
mm-swap-make-page_swapcount-and-__lru_add_drain_all.patch
mm-swap-avoid-calling-swp_swap_info-when-try-to-check-swp_stable_writes.patch
mm-swap-add-helper-swap_offset_available.patch
mm-swap-fix-the-obsolete-comment-for-swp_type_shift.patch
mm-swap-clean-up-the-comment-of-find_next_to_unuse.patch
mm-swap-fix-the-comment-of-get_kernel_pages.patch
mm-swap-fix-comment-about-swap-extent.patch
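
For readers who want to play with the idea outside the kernel tree: the
hunk above boils down to copying two software-defined bits (soft-dirty
and uffd-wp) from the old swap pte into the freshly built present pte
instead of silently dropping them.  The stand-alone C sketch below
models only that bit-carrying pattern with toy flags and helpers; the
names (SWP_SOFT_DIRTY, restore_pte(), ...) are illustrative stand-ins,
not the real kernel pte API.

/*
 * Toy model of the unuse_pte() fix: preserve software pte bits
 * (soft-dirty, uffd-wp) when replacing a swap pte with a present pte.
 * All types, flags and helpers here are stand-ins, not kernel code.
 */
#include <stdint.h>
#include <stdio.h>

#define SWP_SOFT_DIRTY	(1u << 0)	/* stand-in for the swap-pte soft-dirty bit    */
#define SWP_UFFD_WP	(1u << 1)	/* stand-in for the swap-pte uffd-wp bit       */
#define PTE_SOFT_DIRTY	(1u << 2)	/* stand-in for the present-pte soft-dirty bit */
#define PTE_UFFD_WP	(1u << 3)	/* stand-in for the present-pte uffd-wp bit    */

typedef uint64_t toy_pte_t;		/* a pte reduced to a bag of bits */

/* Plays the role of pte_mkold(mk_pte(...)) in this toy model. */
static toy_pte_t mk_present_pte(void)
{
	return 0;
}

/*
 * The pattern the patch introduces: build the new present pte first,
 * then carry each software bit over from the old swap pte.
 */
static toy_pte_t restore_pte(toy_pte_t swap_pte)
{
	toy_pte_t new_pte = mk_present_pte();

	if (swap_pte & SWP_SOFT_DIRTY)
		new_pte |= PTE_SOFT_DIRTY;
	if (swap_pte & SWP_UFFD_WP)
		new_pte |= PTE_UFFD_WP;
	return new_pte;
}

int main(void)
{
	toy_pte_t swap_pte = SWP_SOFT_DIRTY | SWP_UFFD_WP;
	toy_pte_t new_pte = restore_pte(swap_pte);

	printf("soft-dirty preserved: %s\n",
	       (new_pte & PTE_SOFT_DIRTY) ? "yes" : "no");
	printf("uffd-wp preserved:    %s\n",
	       (new_pte & PTE_UFFD_WP) ? "yes" : "no");
	return 0;
}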