mm: always initialise folio->_deferred_list

From: "Matthew Wilcox (Oracle)" <willy@xxxxxxxxxxxxx>

commit b7b098cf00a2b65d5654a86dc8edf82f125289c1 upstream.

Patch series "Various significant MM patches".

These patches all interact in annoying ways which make it tricky to send
them out in any way other than a big batch, even though there's not really
an overarching theme to connect them.

The big effects of this patch series are:

 - folio_test_hugetlb() becomes reliable, even when called without a
   page reference
 - We free up PG_slab, and we could always use more page flags
 - We no longer need to check PageSlab before calling page_mapcount()

This patch (of 9):

For compound pages which are at least order-2 (and hence have a
deferred_list), initialise the deferred_list so that, at free time, we
can check that the page is not part of a deferred list.  We recently
found this check useful to rule out a source of corruption.
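
To make the invariant concrete, here is a minimal userspace sketch of
the initialise-at-prep, check-at-free pattern (simplified stand-in
types and helpers for illustration, not the kernel's own definitions):

	#include <assert.h>

	struct list_head { struct list_head *next, *prev; };

	static void INIT_LIST_HEAD(struct list_head *h) { h->next = h->prev = h; }
	static int list_empty(const struct list_head *h) { return h->next == h; }

	struct folio {			/* stand-in for struct folio */
		unsigned int order;
		struct list_head _deferred_list;
	};

	/* at allocation: orders >= 2 have a deferred_list, so initialise it */
	static void prep_compound_head(struct folio *folio, unsigned int order)
	{
		folio->order = order;
		if (order > 1)
			INIT_LIST_HEAD(&folio->_deferred_list);
	}

	/* at free: an initialised list lets us assert the folio was unqueued */
	static void check_at_free(struct folio *folio)
	{
		if (folio->order > 1)
			assert(list_empty(&folio->_deferred_list));
	}

	int main(void)
	{
		struct folio f;

		prep_compound_head(&f, 2);
		check_at_free(&f);	/* passes: never queued for deferred split */
		return 0;
	}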

[peterx@xxxxxxxxxx: always initialise folio->_deferred_list]
  Link: https://lkml.kernel.org/r/20240417211836.2742593-2-peterx@xxxxxxxxxx
Link: https://lkml.kernel.org/r/20240321142448.1645400-1-willy@xxxxxxxxxxxxx
Link: https://lkml.kernel.org/r/20240321142448.1645400-2-willy@xxxxxxxxxxxxx
Signed-off-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Signed-off-by: Peter Xu <peterx@xxxxxxxxxx>
Reviewed-by: David Hildenbrand <david@xxxxxxxxxx>
Acked-by: Vlastimil Babka <vbabka@xxxxxxx>
Cc: Miaohe Lin <linmiaohe@xxxxxxxxxx>
Cc: Muchun Song <muchun.song@xxxxxxxxx>
Cc: Oscar Salvador <osalvador@xxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
[ This backport includes three small departures from the upstream commit,
  for safety: replace list_del() with list_del_init() in
  split_huge_page_to_list(), as in c010d47f107f ("mm: thp: split huge
  page to any lower order pages"); replace list_del() with
  list_del_init() in folio_undo_large_rmappable(), as in 9bcef5973e31
  ("mm: memcg: fix split queue list crash when large folio migration");
  keep __free_pages() instead of folio_put() in
  __update_and_free_hugetlb_folio(). ]
Signed-off-by: Hugh Dickins <hughd@xxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 mm/huge_memory.c |    6 ++----
 mm/hugetlb.c     |    1 +
 mm/internal.h    |    2 ++
 mm/memcontrol.c  |    3 +++
 mm/page_alloc.c  |    9 +++++----
 5 files changed, 13 insertions(+), 8 deletions(-)

--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -571,8 +571,6 @@ void folio_prep_large_rmappable(struct f
 {
 	if (!folio || !folio_test_large(folio))
 		return;
-	if (folio_order(folio) > 1)
-		INIT_LIST_HEAD(&folio->_deferred_list);
 	folio_set_large_rmappable(folio);
 }
 
@@ -2725,7 +2723,7 @@ int split_huge_page_to_list(struct page
 		if (folio_order(folio) > 1 &&
 		    !list_empty(&folio->_deferred_list)) {
 			ds_queue->split_queue_len--;
-			list_del(&folio->_deferred_list);
+			list_del_init(&folio->_deferred_list);
 		}
 		spin_unlock(&ds_queue->split_queue_lock);
 		if (mapping) {
@@ -2789,7 +2787,7 @@ void folio_undo_large_rmappable(struct f
 	spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
 	if (!list_empty(&folio->_deferred_list)) {
 		ds_queue->split_queue_len--;
-		list_del(&folio->_deferred_list);
+		list_del_init(&folio->_deferred_list);
 	}
 	spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
 }
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1795,6 +1795,7 @@ static void __update_and_free_hugetlb_fo
 		destroy_compound_gigantic_folio(folio, huge_page_order(h));
 		free_gigantic_folio(folio, huge_page_order(h));
 	} else {
+		INIT_LIST_HEAD(&folio->_deferred_list);
 		__free_pages(&folio->page, huge_page_order(h));
 	}
 }
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -431,6 +431,8 @@ static inline void prep_compound_head(st
 	atomic_set(&folio->_entire_mapcount, -1);
 	atomic_set(&folio->_nr_pages_mapped, 0);
 	atomic_set(&folio->_pincount, 0);
+	if (order > 1)
+		INIT_LIST_HEAD(&folio->_deferred_list);
 }
 
 static inline void prep_compound_tail(struct page *head, int tail_idx)
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -7153,6 +7153,9 @@ static void uncharge_folio(struct folio
 	struct obj_cgroup *objcg;
 
 	VM_BUG_ON_FOLIO(folio_test_lru(folio), folio);
+	VM_BUG_ON_FOLIO(folio_order(folio) > 1 &&
+			!folio_test_hugetlb(folio) &&
+			!list_empty(&folio->_deferred_list), folio);
 
 	/*
 	 * Nobody should be changing or seriously looking at
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1002,10 +1002,11 @@ static int free_tail_page_prepare(struct
 		}
 		break;
 	case 2:
-		/*
-		 * the second tail page: ->mapping is
-		 * deferred_list.next -- ignore value.
-		 */
+		/* the second tail page: deferred_list overlaps ->mapping */
+		if (unlikely(!list_empty(&folio->_deferred_list))) {
+			bad_page(page, "on deferred list");
+			goto out;
+		}
 		break;
 	default:
 		if (page->mapping != TAIL_MAPPING) {
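
The list_del() -> list_del_init() substitutions above are what make the
new free-time checks safe: the kernel's list_del() poisons the removed
entry's pointers, so a later list_empty() on that entry returns false
even though the folio is off the queue, and the new checks would report
a spuriously bad page.  list_del_init() re-initialises the entry so it
reads as empty again.  A simplified userspace sketch of the difference
(stand-in helpers for illustration, not <linux/list.h>):

	#include <assert.h>
	#include <stddef.h>

	struct list_head { struct list_head *next, *prev; };

	static void INIT_LIST_HEAD(struct list_head *h) { h->next = h->prev = h; }
	static int list_empty(const struct list_head *h) { return h->next == h; }

	static void list_add(struct list_head *e, struct list_head *head)
	{
		e->next = head->next;
		e->prev = head;
		head->next->prev = e;
		head->next = e;
	}

	static void __list_del_entry(struct list_head *e)
	{
		e->next->prev = e->prev;
		e->prev->next = e->next;
	}

	/* list_del(): entry pointers left dangling (LIST_POISON1/2 upstream) */
	static void list_del(struct list_head *e)
	{
		__list_del_entry(e);
		e->next = e->prev = NULL;
	}

	/* list_del_init(): entry re-initialised; list_empty(e) is true after */
	static void list_del_init(struct list_head *e)
	{
		__list_del_entry(e);
		INIT_LIST_HEAD(e);
	}

	int main(void)
	{
		struct list_head queue, node;

		INIT_LIST_HEAD(&queue);
		list_add(&node, &queue);
		list_del_init(&node);		/* not list_del(&node) */
		assert(list_empty(&node));	/* safe to test again at free */
		return 0;
	}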


Patches currently in stable-queue which might be from willy@xxxxxxxxxxxxx are

queue-6.6/mm-support-order-1-folios-in-the-page-cache.patch
queue-6.6/mm-always-initialise-folio-_deferred_list.patch
queue-6.6/mm-refactor-folio_undo_large_rmappable.patch
queue-6.6/mm-thp-fix-deferred-split-unqueue-naming-and-locking.patch
queue-6.6/mm-add-page_rmappable_folio-wrapper.patch
queue-6.6/mm-readahead-do-not-allow-order-1-folio.patch



