From: Barry Song <v-songbaohua@xxxxxxxx>

The folio_test_anon(folio)==false case within do_swap_page() has been
relocated to folio_add_new_anon_rmap(). Additionally, the two other
callers of __folio_add_anon_rmap() consistently pass anonymous folios:

stack 1:
remove_migration_pmd
	-> folio_add_anon_rmap_pmd
		-> __folio_add_anon_rmap

stack 2:
__split_huge_pmd_locked
	-> folio_add_anon_rmap_ptes
		-> __folio_add_anon_rmap

__folio_add_anon_rmap() now only needs to handle the
folio_test_anon(folio)==true case.

Suggested-by: David Hildenbrand <david@xxxxxxxxxx>
Signed-off-by: Barry Song <v-songbaohua@xxxxxxxx>
---
 mm/rmap.c | 17 +++--------------
 1 file changed, 3 insertions(+), 14 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index e612d999811a..e84c706c8241 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1299,21 +1299,10 @@ static __always_inline void __folio_add_anon_rmap(struct folio *folio,
 
 	nr = __folio_add_rmap(folio, page, nr_pages, level, &nr_pmdmapped);
 
-	if (unlikely(!folio_test_anon(folio))) {
-		VM_WARN_ON_FOLIO(!folio_test_locked(folio), folio);
-		/*
-		 * For a PTE-mapped large folio, we only know that the single
-		 * PTE is exclusive. Further, __folio_set_anon() might not get
-		 * folio->index right when not given the address of the head
-		 * page.
-		 */
-		VM_WARN_ON_FOLIO(folio_test_large(folio) &&
-				 level != RMAP_LEVEL_PMD, folio);
-		__folio_set_anon(folio, vma, address,
-				 !!(flags & RMAP_EXCLUSIVE));
-	} else if (likely(!folio_test_ksm(folio))) {
+	VM_WARN_ON_FOLIO(!folio_test_anon(folio), folio);
+
+	if (likely(!folio_test_ksm(folio)))
 		__page_check_anon_rmap(folio, page, vma, address);
-	}
 
 	__folio_mod_stat(folio, nr, nr_pmdmapped);
-- 
2.34.1
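
Note: for context on the new VM_WARN_ON_FOLIO(), a minimal sketch of the
caller-side dispatch the warning relies on, modeled on the relocated
do_swap_page() logic from this series. Illustrative only, not the exact
hunk from that patch; the rmap_flags and nr_pages locals here are
stand-ins for the caller's real values:

	if (unlikely(!folio_test_anon(folio)))
		/*
		 * Not yet anonymous: folio_add_new_anon_rmap() sets
		 * folio->mapping/index via __folio_set_anon().
		 */
		folio_add_new_anon_rmap(folio, vma, address, rmap_flags);
	else
		/* Already anonymous: take the plain rmap path. */
		folio_add_anon_rmap_ptes(folio, page, nr_pages, vma,
					 address, rmap_flags);

With the !anon path owned by folio_add_new_anon_rmap(), the check in
__folio_add_anon_rmap() becomes a pure sanity check rather than a branch
that must initialize the folio.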