Re: [PATCH] mm/memory-failure: remove shake_page()

On 4/26/24 10:34 AM, Matthew Wilcox wrote:
> On Fri, Apr 26, 2024 at 10:15:11AM -0700, Sidhartha Kumar wrote:
>> Use a folio in get_any_page() to save 5 calls to compound head and
>> convert the last user of shake_page() to shake_folio(). This allows us
>> to remove the shake_page() definition.
>
> So I didn't do this before because I wasn't convinced it was safe.
> We don't have a refcount on the folio, so the page might no longer
> be part of this folio by the time we get the refcount on the folio.
>
> I'd really like to see some argumentation for why this is safe.

If I move the folio = page_folio() line down to after we've verified that __get_hwpoison_page() returned 1, which indicates the reference count was successfully taken via folio_try_get(), then the folio conversion only happens once we hold a refcount. In the case where __get_hwpoison_page() is not called at all, the MF_COUNT_INCREASED flag must be set, which means the page already has existing users, so that path is safe as well. So I think this is safe after moving page_folio() to after __get_hwpoison_page().
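
For clarity, the ordering I have in mind looks roughly like this (just a
sketch of the idea, not the respun patch; the retry/error branches that
run before we hold a reference are elided and would keep operating on the
raw page):

static int get_any_page(struct page *p, unsigned long flags)
{
	int ret = 0, pass = 0;
	bool count_increased = false;
	struct folio *folio;

	if (flags & MF_COUNT_INCREASED)
		count_increased = true;

try_again:
	if (!count_increased) {
		ret = __get_hwpoison_page(p, flags);
		/*
		 * Retry/error handling unchanged; those paths still work
		 * on the raw page and bail out before the conversion
		 * below.
		 */
	}

	/*
	 * Either __get_hwpoison_page() returned 1 and we hold a
	 * reference, or the caller passed MF_COUNT_INCREASED and the
	 * page already has users, so p can no longer be split away
	 * from its folio and the conversion is stable.
	 */
	folio = page_folio(p);

	if (folio_test_hugetlb(folio) || HWPoisonHandlable(p, flags)) {
		ret = 1;
	} else {
		/* shake_folio()/folio_put() retry path as in the patch */
	}

	/* ... rest unchanged ... */
}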

Does that seem correct?

Thanks,
Sid





Signed-off-by: Sidhartha Kumar <sidhartha.kumar@xxxxxxxxxx>
---
  mm/memory-failure.c | 20 ++++++++------------
  1 file changed, 8 insertions(+), 12 deletions(-)

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 16ada4fb02b79..273f6fef29f25 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -385,11 +385,6 @@ void shake_folio(struct folio *folio)
  }
  EXPORT_SYMBOL_GPL(shake_folio);

-static void shake_page(struct page *page)
-{
-	shake_folio(page_folio(page));
-}
-
  static unsigned long dev_pagemap_mapping_shift(struct vm_area_struct *vma,
  		unsigned long address)
  {
@@ -1433,6 +1428,7 @@ static int get_any_page(struct page *p, unsigned long flags)
  {
  	int ret = 0, pass = 0;
  	bool count_increased = false;
+	struct folio *folio = page_folio(p);

  	if (flags & MF_COUNT_INCREASED)
  		count_increased = true;
@@ -1446,7 +1442,7 @@ static int get_any_page(struct page *p, unsigned long flags)
  				if (pass++ < 3)
  					goto try_again;
  				ret = -EBUSY;
-			} else if (!PageHuge(p) && !is_free_buddy_page(p)) {
+			} else if (!folio_test_hugetlb(folio) && !is_free_buddy_page(p)) {
  				/* We raced with put_page, retry. */
  				if (pass++ < 3)
  					goto try_again;
@@ -1459,7 +1455,7 @@ static int get_any_page(struct page *p, unsigned long flags)
  			 * page, retry.
  			 */
  			if (pass++ < 3) {
-				shake_page(p);
+				shake_folio(folio);
  				goto try_again;
  			}
  			ret = -EIO;
@@ -1467,7 +1463,7 @@ static int get_any_page(struct page *p, unsigned long flags)
  		}
  	}

-	if (PageHuge(p) || HWPoisonHandlable(p, flags)) {
+	if (folio_test_hugetlb(folio) || HWPoisonHandlable(p, flags)) {
  		ret = 1;
  	} else {
  		/*
@@ -1475,12 +1471,12 @@ static int get_any_page(struct page *p, unsigned long flags)
  		 * it into something we can handle.
  		 */
  		if (pass++ < 3) {
-			put_page(p);
-			shake_page(p);
+			folio_put(folio);
+			shake_folio(folio);
  			count_increased = false;
  			goto try_again;
  		}
-		put_page(p);
+		folio_put(folio);
  		ret = -EIO;
  	}
  out:
@@ -1643,7 +1639,7 @@ static bool hwpoison_user_mappings(struct folio *folio, struct page *p,

  	/*
  	 * try_to_unmap() might put mlocked page in lru cache, so call
-	 * shake_page() again to ensure that it's flushed.
+	 * shake_folio() again to ensure that it's flushed.
  	 */
  	if (mlocked)
  		shake_folio(folio);
--
2.44.0






