On 13.06.24 02:07, Barry Song wrote:
> From: Barry Song <v-songbaohua@xxxxxxxx>
>
> For the !folio_test_anon(folio) case, we can now invoke
> folio_add_new_anon_rmap() with the rmap flags set to either EXCLUSIVE
> or non-EXCLUSIVE. This action will suppress the VM_WARN_ON_FOLIO check
> within __folio_add_anon_rmap() while initiating the process of bringing
> up mTHP swapin.
>
>  static __always_inline void __folio_add_anon_rmap(struct folio *folio,
>  		struct page *page, int nr_pages, struct vm_area_struct *vma,
>  		unsigned long address, rmap_t flags, enum rmap_level level)
>  {
>  	...
>  	if (unlikely(!folio_test_anon(folio))) {
>  		VM_WARN_ON_FOLIO(folio_test_large(folio) &&
>  				 level != RMAP_LEVEL_PMD, folio);
>  	}
>  	...
>  }
>
> It also enhances the code's readability.
>
> Suggested-by: David Hildenbrand <david@xxxxxxxxxx>
> Signed-off-by: Barry Song <v-songbaohua@xxxxxxxx>
> ---
>  mm/memory.c | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 2f94921091fb..9c962f62f928 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -4339,6 +4339,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  	if (unlikely(folio != swapcache && swapcache)) {
>  		folio_add_new_anon_rmap(folio, vma, address, RMAP_EXCLUSIVE);
>  		folio_add_lru_vma(folio, vma);
> +	} else if (!folio_test_anon(folio)) {
> +		folio_add_new_anon_rmap(folio, vma, address, rmap_flags);
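For context, a heavily simplified paraphrase of what folio_add_new_anon_rmap() does with that flag (from memory, not the exact mm/rmap.c code, which also handles mapcounts, PMD mappings and the debug checks). The point is that a single exclusivity decision gets applied to the whole folio:

	/* Simplified paraphrase of folio_add_new_anon_rmap(), not the upstream code. */
	void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
			unsigned long address, rmap_t flags)
	{
		const bool exclusive = flags & RMAP_EXCLUSIVE;
		long i, nr = folio_nr_pages(folio);

		/* The folio becomes anon as a whole, with one exclusivity decision ... */
		__folio_set_anon(folio, vma, address, exclusive);

		/* ... and, if exclusive, every subpage gets PG_anon_exclusive set. */
		if (exclusive)
			for (i = 0; i < nr; i++)
				SetPageAnonExclusive(folio_page(folio, i));
	}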
So, with large folio swapin, we would never end up here if any swp PTE is !exclusive, because we would make sure to allocate a large folio only for suitable regions, correct?
It can certainly happen during swapout + refault with large folios. But there, folio_test_anon() will still be set on the folio, so we wouldn't run into this code path.
(it will all be better with a per-folio PAE bit, but that will take some more time until I actually get to implement it properly, handling all the nasty corner cases)
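To make the "suitable regions" assumption above concrete, something along these lines is what the large folio swapin path would have to verify before allocating a large folio (purely a hypothetical sketch; the helper name and the exact set of checks are mine, not from the series):

	/*
	 * Hypothetical sketch, not from the series: only allocate a large folio
	 * for swapin if every swap PTE in the candidate range is exclusive, so
	 * that a single rmap_flags value is valid for the whole folio.
	 */
	static bool swap_ptes_all_exclusive(pte_t *ptep, int nr_pages)
	{
		int i;

		for (i = 0; i < nr_pages; i++) {
			pte_t pte = ptep_get(ptep + i);

			if (!is_swap_pte(pte) || !pte_swp_exclusive(pte))
				return false;
		}
		return true;
	}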
Better add a comment regarding why we can be sure that the complete folio is exclusive (e.g., currently only small folios).
>  	} else {
>  		folio_add_anon_rmap_ptes(folio, page, nr_pages, vma, address,
>  					 rmap_flags);
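Something like the following is what I have in mind (only a sketch; the comment wording is merely a suggestion, not from the patch):

	} else if (!folio_test_anon(folio)) {
		/*
		 * We currently only get here with small folios, or with
		 * large folios whose swap PTEs are all exclusive, so one
		 * rmap_flags value is valid for the complete folio.
		 */
		folio_add_new_anon_rmap(folio, vma, address, rmap_flags);
	} else {
		folio_add_anon_rmap_ptes(folio, page, nr_pages, vma, address,
					 rmap_flags);
	}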
--
Cheers,

David / dhildenb