Re: [PATCH] mm: convert partially_mapped set/clear operations to be atomic

On 12.12.24 14:54, Usama Arif wrote:
Other page flags in the 2nd page, like PG_hwpoison and
PG_anon_exclusive, can be modified concurrently. Hence, the
partially_mapped flag needs to be set and cleared atomically.

Fixes: 8422acdc97ed ("mm: introduce a pageflag for partially mapped folios")
Reported-by: David Hildenbrand <david@xxxxxxxxxx>
Signed-off-by: Usama Arif <usamaarif642@xxxxxxxxx>
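
To illustrate the race being fixed, here is a minimal sketch (hypothetical
bit numbers and helper names, not kernel code) of how a non-atomic
read-modify-write on the shared flags word of the second page can discard
a concurrent atomic update:

/* Hypothetical stand-ins; for illustration only. */
#define PG_PARTIALLY_MAPPED	0
#define PG_HWPOISON		1

static unsigned long flags;	/* stands in for the 2nd page's flags word */

/* CPU 0: what the non-atomic __folio_set_partially_mapped() boils down to */
static void cpu0_nonatomic_set(void)
{
	unsigned long tmp = flags;		/* load   */
	tmp |= 1UL << PG_PARTIALLY_MAPPED;	/* modify */
	/*
	 * If CPU 1 performs an atomic set_bit(PG_HWPOISON, &flags) here,
	 * the store below writes back the stale value and PG_HWPOISON is
	 * silently lost. set_bit()/clear_bit() do the read-modify-write
	 * atomically, so concurrent updates to other bits are preserved.
	 */
	flags = tmp;				/* store  */
}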

Fortunately, we have the test-before-set checks already in place; see the
condensed sketch below.

Acked-by: David Hildenbrand <david@xxxxxxxxxx>
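
The pattern in question, condensed from the mm/huge_memory.c hunks below:
the flag test, the atomic bitop and the statistics update stay paired
under ds_queue->split_queue_lock, and redundant atomic RMWs are skipped:

	if (!folio_test_partially_mapped(folio)) {
		folio_set_partially_mapped(folio);	/* atomic set_bit() now */
		count_mthp_stat(folio_order(folio), MTHP_STAT_SPLIT_DEFERRED);
	}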

---
  include/linux/page-flags.h | 12 ++----------
  mm/huge_memory.c           |  8 ++++----
  2 files changed, 6 insertions(+), 14 deletions(-)

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index cf46ac720802..691506bdf2c5 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -862,18 +862,10 @@ static inline void ClearPageCompound(struct page *page)
  	ClearPageHead(page);
  }
  FOLIO_FLAG(large_rmappable, FOLIO_SECOND_PAGE)
-FOLIO_TEST_FLAG(partially_mapped, FOLIO_SECOND_PAGE)
-/*
- * PG_partially_mapped is protected by deferred_split split_queue_lock,
- * so its safe to use non-atomic set/clear.
- */
-__FOLIO_SET_FLAG(partially_mapped, FOLIO_SECOND_PAGE)
-__FOLIO_CLEAR_FLAG(partially_mapped, FOLIO_SECOND_PAGE)
+FOLIO_FLAG(partially_mapped, FOLIO_SECOND_PAGE)
  #else
  FOLIO_FLAG_FALSE(large_rmappable)
-FOLIO_TEST_FLAG_FALSE(partially_mapped)
-__FOLIO_SET_FLAG_NOOP(partially_mapped)
-__FOLIO_CLEAR_FLAG_NOOP(partially_mapped)
+FOLIO_FLAG_FALSE(partially_mapped)
  #endif
#define PG_head_mask ((1UL << PG_head))
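
For context, FOLIO_FLAG() generates test/set/clear accessors backed by the
atomic set_bit()/clear_bit(), whereas the removed __FOLIO_SET_FLAG() /
__FOLIO_CLEAR_FLAG() variants used the non-atomic __set_bit()/__clear_bit().
Roughly what the two expand to for this flag (paraphrased from
page-flags.h, not verbatim):

/* atomic, via FOLIO_FLAG(partially_mapped, FOLIO_SECOND_PAGE): */
static __always_inline void folio_set_partially_mapped(struct folio *folio)
{
	set_bit(PG_partially_mapped, folio_flags(folio, FOLIO_SECOND_PAGE));
}

/* non-atomic, via the removed __FOLIO_SET_FLAG(): */
static __always_inline void __folio_set_partially_mapped(struct folio *folio)
{
	__set_bit(PG_partially_mapped, folio_flags(folio, FOLIO_SECOND_PAGE));
}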
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2da5520bfe24..120cd2cdc614 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3583,7 +3583,7 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
  		    !list_empty(&folio->_deferred_list)) {
  			ds_queue->split_queue_len--;
  			if (folio_test_partially_mapped(folio)) {
-				__folio_clear_partially_mapped(folio);
+				folio_clear_partially_mapped(folio);
  				mod_mthp_stat(folio_order(folio),
  					      MTHP_STAT_NR_ANON_PARTIALLY_MAPPED, -1);
  			}
@@ -3695,7 +3695,7 @@ bool __folio_unqueue_deferred_split(struct folio *folio)
  	if (!list_empty(&folio->_deferred_list)) {
  		ds_queue->split_queue_len--;
  		if (folio_test_partially_mapped(folio)) {
-			__folio_clear_partially_mapped(folio);
+			folio_clear_partially_mapped(folio);
  			mod_mthp_stat(folio_order(folio),
  				      MTHP_STAT_NR_ANON_PARTIALLY_MAPPED, -1);
  		}
@@ -3739,7 +3739,7 @@ void deferred_split_folio(struct folio *folio, bool partially_mapped)
  	spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
  	if (partially_mapped) {
  		if (!folio_test_partially_mapped(folio)) {
-			__folio_set_partially_mapped(folio);
+			folio_set_partially_mapped(folio);
  			if (folio_test_pmd_mappable(folio))
  				count_vm_event(THP_DEFERRED_SPLIT_PAGE);
  			count_mthp_stat(folio_order(folio), MTHP_STAT_SPLIT_DEFERRED);
@@ -3832,7 +3832,7 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
  		} else {
  			/* We lost race with folio_put() */
  			if (folio_test_partially_mapped(folio)) {
-				__folio_clear_partially_mapped(folio);
+				folio_clear_partially_mapped(folio);
  				mod_mthp_stat(folio_order(folio),
  					      MTHP_STAT_NR_ANON_PARTIALLY_MAPPED, -1);
  			}


--
Cheers,

David / dhildenb




