[PATCH] mm: don't convert the page to folio before splitting in split_huge_page()

From: Pankaj Raghav <p.raghav@xxxxxxxxxxx>

Sven reported that a commit from the bs > ps (block size > page size)
series was breaking the ksm LTP test [1].

split_huge_page() takes a specific page that must be locked, and it
expects the folio that contains that page to remain locked once the
huge page has been split. The offending change converted the page to
its folio and passed the folio's head page to be split, which might
not be the page that was locked, resulting in a kernel panic.
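
To make the locking contract concrete, here is a caller-side sketch
(hypothetical, not from this patch or from ksm itself; it only models
the pattern that the LTP test exercises):

/* Hypothetical helper; needs <linux/pagemap.h> and <linux/huge_mm.h>. */
static int try_split_locked_page(struct page *page)
{
	int ret;

	/* The caller locks one specific page, possibly a tail page. */
	lock_page(page);

	/* On success, `page` itself must still be locked afterwards. */
	ret = split_huge_page(page);

	/*
	 * Unlocks `page`. If the split had instead left only the old
	 * head page locked, this would unlock an already-unlocked page
	 * and trip the page-lock debugging checks.
	 */
	unlock_page(page);

	return ret;
}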

Fix this by having split_huge_page() pass the page it was given,
rather than the head page, down to split_huge_page_to_list_to_order(),
along with the appropriate minimum folio order for the split.

[1] https://lore.kernel.org/linux-xfs/yt9dttf3r49e.fsf@xxxxxxxxxxxxx/
Reported-by: Sven Schnelle <svens@xxxxxxxxxxxxx>
Fixes: fd031210c9ce ("mm: split a folio in minimum folio order chunks")
Signed-off-by: Pankaj Raghav <p.raghav@xxxxxxxxxxx>
---
This applies to the vfs.blocksize branch on the vfs tree.

@Christian, Stephen already sent a mail saying that there is a conflict
between these changes and mm-unstable. For now, I have based this patch
on your tree. Let me know if you need the same patch based on
linux-next.

 include/linux/huge_mm.h | 16 +++++++++++++++-
 mm/huge_memory.c        | 21 +++++++++++++--------
 2 files changed, 28 insertions(+), 9 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 7c50aeed0522..7a570e0437c9 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -319,10 +319,24 @@ unsigned long thp_get_unmapped_area_vmflags(struct file *filp, unsigned long add
 bool can_split_folio(struct folio *folio, int *pextra_pins);
 int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
 		unsigned int new_order);
+int min_order_for_split(struct folio *folio);
 int split_folio_to_list(struct folio *folio, struct list_head *list);
 static inline int split_huge_page(struct page *page)
 {
-	return split_folio(page_folio(page));
+	struct folio *folio = page_folio(page);
+	int ret = min_order_for_split(folio);
+
+	if (ret < 0)
+		return ret;
+
+	/*
+	 * split_huge_page() locks the page before splitting and
+	 * expects the same page that has been split to be locked when
+	 * returned. split_folio(page_folio(page)) cannot be used here
+	 * because it converts the page to folio and passes the head
+	 * page to be split.
+	 */
+	return split_huge_page_to_list_to_order(page, NULL, ret);
 }
 void deferred_split_folio(struct folio *folio);
 
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index c29af9451d92..9931ff1d9a9d 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3297,12 +3297,10 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
 	return ret;
 }
 
-int split_folio_to_list(struct folio *folio, struct list_head *list)
+int min_order_for_split(struct folio *folio)
 {
-	unsigned int min_order = 0;
-
 	if (folio_test_anon(folio))
-		goto out;
+		return 0;
 
 	if (!folio->mapping) {
 		if (folio_test_pmd_mappable(folio))
@@ -3310,10 +3308,17 @@ int split_folio_to_list(struct folio *folio, struct list_head *list)
 		return -EBUSY;
 	}
 
-	min_order = mapping_min_folio_order(folio->mapping);
-out:
-	return split_huge_page_to_list_to_order(&folio->page, list,
-							min_order);
+	return mapping_min_folio_order(folio->mapping);
+}
+
+int split_folio_to_list(struct folio *folio, struct list_head *list)
+{
+	int ret = min_order_for_split(folio);
+
+	if (ret < 0)
+		return ret;
+
+	return split_huge_page_to_list_to_order(&folio->page, list, ret);
 }
 
 void __folio_undo_large_rmappable(struct folio *folio)
-- 
2.44.1