Re: [PATCH v2 2/3] mm: use folio_add_new_anon_rmap() if folio_test_anon(folio)==false

On 18.06.24 01:11, Barry Song wrote:
From: Barry Song <v-songbaohua@xxxxxxxx>

For the !folio_test_anon(folio) case, we can now invoke
folio_add_new_anon_rmap() with the rmap flags set to either EXCLUSIVE or
non-EXCLUSIVE. This avoids the following VM_WARN_ON_FOLIO check in
__folio_add_anon_rmap() as we bring up mTHP swapin:

  static __always_inline void __folio_add_anon_rmap(struct folio *folio,
                  struct page *page, int nr_pages, struct vm_area_struct *vma,
                  unsigned long address, rmap_t flags, enum rmap_level level)
  {
          ...
          if (unlikely(!folio_test_anon(folio))) {
                  VM_WARN_ON_FOLIO(folio_test_large(folio) &&
                                   level != RMAP_LEVEL_PMD, folio);
          }
          ...
  }

It also improves the code’s readability. Currently, all new anonymous
folios passed to folio_add_anon_rmap_ptes() are order-0. This ensures
that new folios cannot be partially exclusive: they are either entirely
exclusive or entirely shared.
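
As an aside (illustrative only, not part of this patch): exclusivity is
tracked per swap PTE via pte_swp_exclusive(), so a large folio spanning
multiple swap PTEs could in principle end up partially exclusive, while
an order-0 folio covers exactly one PTE and thus cannot. A sketch, where
"pte" and "nr_pages" are placeholders for the first of the folio's swap
PTEs and the number of PTEs the folio spans:

	bool any_exclusive = false, any_shared = false;
	int i;

	/*
	 * Each swap PTE carries its own exclusive bit. With
	 * nr_pages == 1 (order-0), at most one of the two flags
	 * can ever become true.
	 */
	for (i = 0; i < nr_pages; i++) {
		if (pte_swp_exclusive(ptep_get(pte + i)))
			any_exclusive = true;
		else
			any_shared = true;
	}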

Suggested-by: David Hildenbrand <david@xxxxxxxxxx>
Signed-off-by: Barry Song <v-songbaohua@xxxxxxxx>
Tested-by: Shuai Yuan <yuanshuai@xxxxxxxx>
---
  mm/memory.c   |  8 ++++++++
  mm/swapfile.c | 13 +++++++++++--
  2 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 1f24ecdafe05..620654c13b2f 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4339,6 +4339,14 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
  	if (unlikely(folio != swapcache && swapcache)) {
  		folio_add_new_anon_rmap(folio, vma, address, RMAP_EXCLUSIVE);
  		folio_add_lru_vma(folio, vma);
+	} else if (!folio_test_anon(folio)) {
+		/*
+	 * We currently only expect small !anon folios, for which we know
+	 * that they are either fully exclusive or fully shared. If we
+		 * ever get large folios here, we have to be careful.
+		 */
+		VM_WARN_ON_ONCE(folio_test_large(folio));
+		folio_add_new_anon_rmap(folio, vma, address, rmap_flags);
  	} else {
  		folio_add_anon_rmap_ptes(folio, page, nr_pages, vma, address,
  					rmap_flags);
diff --git a/mm/swapfile.c b/mm/swapfile.c
index ae1d2700f6a3..69efa1a57087 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1908,8 +1908,17 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
  		VM_BUG_ON_FOLIO(folio_test_writeback(folio), folio);
  		if (pte_swp_exclusive(old_pte))
  			rmap_flags |= RMAP_EXCLUSIVE;
-
-		folio_add_anon_rmap_pte(folio, page, vma, addr, rmap_flags);
+		/*
+	 * We currently only expect small !anon folios, for which we know that
+		 * they are either fully exclusive or fully shared. If we ever get
+		 * large folios here, we have to be careful.
+		 */
+		if (!folio_test_anon(folio)) {
+			VM_WARN_ON_ONCE(folio_test_large(folio));

(comment applies to both cases)

Thinking about Hugh's comment, we should likely add here:

VM_WARN_ON_FOLIO(!folio_test_locked(folio), folio);

[the check we are removing from __folio_add_anon_rmap()]

and document for folio_add_new_anon_rmap() in patch #1 that, when dealing with folios that might be mapped concurrently by others, the folio lock must be held.
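
For illustration, the !anon branch in unuse_pte() with that assertion
folded in could look like this (just a sketch, not a final hunk):

	if (!folio_test_anon(folio)) {
		VM_WARN_ON_ONCE(folio_test_large(folio));
		/* Concurrent mappers must be excluded via the folio lock. */
		VM_WARN_ON_FOLIO(!folio_test_locked(folio), folio);
		folio_add_new_anon_rmap(folio, vma, addr, rmap_flags);
	} else {
		folio_add_anon_rmap_pte(folio, page, vma, addr, rmap_flags);
	}

And for the kerneldoc of folio_add_new_anon_rmap() in patch #1, roughly
(wording just a suggestion):

	 * When dealing with folios that might be mapped concurrently by
	 * others, the folio lock must be held.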

+			folio_add_new_anon_rmap(folio, vma, addr, rmap_flags);
+		} else {
+			folio_add_anon_rmap_pte(folio, page, vma, addr, rmap_flags);
+		}
  	} else { /* ksm created a completely new copy */
  		folio_add_new_anon_rmap(folio, vma, addr, RMAP_EXCLUSIVE);
  		folio_add_lru_vma(folio, vma);

--
Cheers,

David / dhildenb




