From: Barry Song <v-songbaohua@xxxxxxxx>

For the !folio_test_anon(folio) case, we can now invoke
folio_add_new_anon_rmap() with the rmap flags set to either EXCLUSIVE or
non-EXCLUSIVE. This suppresses the VM_WARN_ON_FOLIO check within
__folio_add_anon_rmap() as we bring up mTHP swapin:

static __always_inline void __folio_add_anon_rmap(struct folio *folio,
		struct page *page, int nr_pages, struct vm_area_struct *vma,
		unsigned long address, rmap_t flags, enum rmap_level level)
{
	...
	if (unlikely(!folio_test_anon(folio))) {
		VM_WARN_ON_FOLIO(folio_test_large(folio) &&
				 level != RMAP_LEVEL_PMD, folio);
	}
	...
}

It also improves the code's readability. Currently, all new anonymous
folios calling folio_add_anon_rmap_ptes() are order-0. This ensures that
new folios cannot be partially exclusive; they are either entirely
exclusive or entirely shared.

Suggested-by: David Hildenbrand <david@xxxxxxxxxx>
Signed-off-by: Barry Song <v-songbaohua@xxxxxxxx>
Tested-by: Shuai Yuan <yuanshuai@xxxxxxxx>
---
 mm/memory.c   |  8 ++++++++
 mm/swapfile.c | 13 +++++++++++--
 2 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 1f24ecdafe05..620654c13b2f 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4339,6 +4339,14 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	if (unlikely(folio != swapcache && swapcache)) {
 		folio_add_new_anon_rmap(folio, vma, address, RMAP_EXCLUSIVE);
 		folio_add_lru_vma(folio, vma);
+	} else if (!folio_test_anon(folio)) {
+		/*
+		 * We currently only expect small !anon folios, for which we
+		 * know that they are either fully exclusive or fully shared.
+		 * If we ever get large folios here, we have to be careful.
+		 */
+		VM_WARN_ON_ONCE(folio_test_large(folio));
+		folio_add_new_anon_rmap(folio, vma, address, rmap_flags);
 	} else {
 		folio_add_anon_rmap_ptes(folio, page, nr_pages, vma, address,
 					 rmap_flags);
diff --git a/mm/swapfile.c b/mm/swapfile.c
index ae1d2700f6a3..69efa1a57087 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1908,8 +1908,17 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
 		VM_BUG_ON_FOLIO(folio_test_writeback(folio), folio);
 		if (pte_swp_exclusive(old_pte))
 			rmap_flags |= RMAP_EXCLUSIVE;
-
-		folio_add_anon_rmap_pte(folio, page, vma, addr, rmap_flags);
+		/*
+		 * We currently only expect small !anon folios, for which we
+		 * know that they are either fully exclusive or fully shared.
+		 * If we ever get large folios here, we have to be careful.
+		 */
+		if (!folio_test_anon(folio)) {
+			VM_WARN_ON_ONCE(folio_test_large(folio));
+			folio_add_new_anon_rmap(folio, vma, addr, rmap_flags);
+		} else {
+			folio_add_anon_rmap_pte(folio, page, vma, addr, rmap_flags);
+		}
 	} else { /* ksm created a completely new copy */
 		folio_add_new_anon_rmap(folio, vma, addr, RMAP_EXCLUSIVE);
 		folio_add_lru_vma(folio, vma);
-- 
2.34.1
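
As a postscript for readers following along outside the kernel tree: the
dispatch rule this patch establishes in do_swap_page() and unuse_pte() can
be modeled by a small standalone userspace sketch. Everything below is
illustrative stand-ins, not the kernel API: struct folio here is a two-flag
toy, and add_new_anon_rmap()/add_anon_rmap_pte() merely stand in for
folio_add_new_anon_rmap()/folio_add_anon_rmap_pte().

/* Standalone sketch (NOT kernel code) of the anon-rmap dispatch rule. */
#include <stdbool.h>
#include <stdio.h>

enum rmap_flags { RMAP_NONE = 0, RMAP_EXCLUSIVE = 1 };

struct folio {
	bool anon;	/* already anonymous (cf. folio_test_anon()) */
	bool large;	/* larger than order-0 (cf. folio_test_large()) */
};

/* Stand-in for folio_add_new_anon_rmap(): first anon mapping of a folio. */
static void add_new_anon_rmap(struct folio *f, enum rmap_flags flags)
{
	f->anon = true;
	printf("new anon rmap, exclusive=%d\n", flags & RMAP_EXCLUSIVE);
}

/* Stand-in for folio_add_anon_rmap_pte(): extra PTE map of an anon folio. */
static void add_anon_rmap_pte(struct folio *f, enum rmap_flags flags)
{
	(void)f;
	printf("additional anon rmap, exclusive=%d\n", flags & RMAP_EXCLUSIVE);
}

/*
 * The rule from the patch: a swapped-in folio that is not yet anon takes
 * the "new" path whether the mapping is exclusive or shared, because a
 * small (order-0) folio is entirely one or the other. A large !anon folio
 * would need per-page care, hence the warning.
 */
static void map_swapped_in_folio(struct folio *f, enum rmap_flags flags)
{
	if (!f->anon) {
		if (f->large)
			fprintf(stderr, "warn: large !anon folio\n");
		add_new_anon_rmap(f, flags);
	} else {
		add_anon_rmap_pte(f, flags);
	}
}

int main(void)
{
	struct folio shared = { .anon = false, .large = false };
	struct folio excl = { .anon = false, .large = false };

	/* A shared swapin (no swap-PTE exclusive bit) also takes the new path. */
	map_swapped_in_folio(&shared, RMAP_NONE);
	map_swapped_in_folio(&excl, RMAP_EXCLUSIVE);
	/* A second mapping of the now-anon folio goes through the pte path. */
	map_swapped_in_folio(&excl, RMAP_EXCLUSIVE);
	return 0;
}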