Re: [PATCH 4/4] mm: memory_hotplug: unify Huge/LRU/non-LRU movable folio isolation

On 25.07.24 03:16, Kefeng Wang wrote:
Move isolate_hugetlb() to after grabbing a reference, and use
isolate_folio_to_list() to unify hugetlb/LRU/non-LRU folio
isolation, which cleans up the code a bit and saves a few calls
to compound_head().

Signed-off-by: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
---
  mm/memory_hotplug.c | 48 +++++++++++++++------------------------------
  1 file changed, 16 insertions(+), 32 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index ccaf4c480aed..057037766efa 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1773,20 +1773,17 @@ static int scan_movable_pages(unsigned long start, unsigned long end,
  static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
  {
  	unsigned long pfn;
-	struct page *page, *head;
  	LIST_HEAD(source);
+	struct folio *folio;
  	static DEFINE_RATELIMIT_STATE(migrate_rs, DEFAULT_RATELIMIT_INTERVAL,
  				      DEFAULT_RATELIMIT_BURST);
 
 	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
-		struct folio *folio;
-		bool isolated;
+		struct page *page;
 
 		if (!pfn_valid(pfn))
  			continue;
  		page = pfn_to_page(pfn);
-		folio = page_folio(page);
-		head = &folio->page;
 
 		/*
  		 * HWPoison pages have elevated reference counts so the migration would
@@ -1808,36 +1805,22 @@ static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
  			continue;
  		}
 
-		if (PageHuge(page)) {
-			pfn = page_to_pfn(head) + compound_nr(head) - 1;
-			isolate_hugetlb(folio, &source);
+		folio = folio_get_nontail_page(page);
+		if (!folio)
  			continue;

There is one interesting case: 1 GiB hugetlb folios can span multiple memory blocks (e.g., 128 MiB). Offlining individual blocks must work.

If you do the folio_get_nontail_page(), we'd no longer be able to offline a memory block in the middle of such a folio, because we'd never even try isolating it.

So likely we have to try getting the head page of a large folio instead (as a fallback if this fails?) and continue from there.

In the case of a free hugetlb tail page, we would now iterate over each individual page instead of simply jumping forward like the old code did. I think we want to maintain that behavior as well?

-		} else if (PageTransHuge(page))
-			pfn = page_to_pfn(head) + thp_nr_pages(page) - 1;
-
-		if (!get_page_unless_zero(page))
-			continue;
-		/*
-		 * We can skip free pages. And we can deal with pages on
-		 * LRU and non-lru movable pages.
-		 */
-		if (PageLRU(page))
-			isolated = isolate_lru_page(page);
-		else
-			isolated = isolate_movable_page(page, ISOLATE_UNEVICTABLE);
-		if (isolated) {
-			list_add_tail(&page->lru, &source);
-			if (!__PageMovable(page))
-				inc_node_page_state(page, NR_ISOLATED_ANON +
-						    page_is_file_lru(page));
 
-		} else {
+		/* Skip free folios, deal with hugetlb, LRU and non-lru movable folios. */

Can you clarify what "skip free folios" means? For free folios the folio_get_nontail_page() shouldn't have succeeded. Did you mean if the folio got freed in the meantime?

+		if (!isolate_folio_to_list(folio, &source)) {
  			if (__ratelimit(&migrate_rs)) {
  				pr_warn("failed to isolate pfn %lx\n", pfn);
  				dump_page(page, "isolation failed");
  			}
  		}


--
Cheers,

David / dhildenb




