On 2024/8/2 4:23, David Hildenbrand wrote:
On 25.07.24 03:16, Kefeng Wang wrote:
Move isolate_hugetlb() after grabbing a reference, and use
isolate_folio_to_list() to unify hugetlb/LRU/non-LRU folio
isolation, which cleans up the code a bit and saves a few calls
to compound_head().
Signed-off-by: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
---
mm/memory_hotplug.c | 48 +++++++++++++++------------------------------
1 file changed, 16 insertions(+), 32 deletions(-)
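
(For readers without the earlier patch in this series at hand: a minimal
sketch of what such a unified helper could look like, pieced together from
the description above and the existing folio_isolate_lru(),
isolate_movable_page() and isolate_hugetlb() helpers; an approximation,
not necessarily the exact code being added.)

bool isolate_folio_to_list(struct folio *folio, struct list_head *list)
{
	bool isolated, lru;

	/* hugetlb folios have their own isolation path and list handling */
	if (folio_test_hugetlb(folio))
		return isolate_hugetlb(folio, list);

	/* non-lru movable folios are marked via __folio_test_movable() */
	lru = !__folio_test_movable(folio);
	if (lru)
		isolated = folio_isolate_lru(folio);
	else
		isolated = isolate_movable_page(&folio->page,
						ISOLATE_UNEVICTABLE);
	if (!isolated)
		return false;

	list_add(&folio->lru, list);
	if (lru)
		node_stat_add_folio(folio, NR_ISOLATED_ANON +
				    folio_is_file_lru(folio));
	return true;
}
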
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index ccaf4c480aed..057037766efa 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1773,20 +1773,17 @@ static int scan_movable_pages(unsigned long start, unsigned long end,
static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
{
unsigned long pfn;
- struct page *page, *head;
LIST_HEAD(source);
+ struct folio *folio;
static DEFINE_RATELIMIT_STATE(migrate_rs, DEFAULT_RATELIMIT_INTERVAL,
			      DEFAULT_RATELIMIT_BURST);
for (pfn = start_pfn; pfn < end_pfn; pfn++) {
- struct folio *folio;
- bool isolated;
+ struct page *page;
if (!pfn_valid(pfn))
continue;
page = pfn_to_page(pfn);
- folio = page_folio(page);
- head = &folio->page;
/*
* HWPoison pages have elevated reference counts so the migration would
@@ -1808,36 +1805,22 @@ static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
continue;
}
- if (PageHuge(page)) {
- pfn = page_to_pfn(head) + compound_nr(head) - 1;
- isolate_hugetlb(folio, &source);
+ folio = folio_get_nontail_page(page);
+ if (!folio)
continue;
There is one interesting case: 1 GiB hugetlb folios can span multiple
memory blocks (e.g., 128 MiB). Offlining individual blocks must work.
If you do the folio_get_nontail_page() we'd not be able to offline a
memory block in the middle anymore, because we'd never even try
isolating it.
Indeed, will test this case.
So likely we have to try getting the head page of a large folio instead
(as a fallback if this fails?) and continue from there.
In case of a free hugetlb tail page we would now iterate each
individual page instead of simply jumping forward like the old code
would have done. I think we want to maintain that behavior as well?
Yes, this can only occur for the first hugetlb folio, when start_pfn is
not the head page; will reconsider this part.
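
Something along these lines, maybe? A rough, untested sketch of that
fallback: folio_try_get() on the head can still race with the folio
being split or freed, and a completely free hugetlb folio would still be
walked page by page here, so this needs more thought:

		folio = folio_get_nontail_page(page);
		if (!folio) {
			/*
			 * Tail page of a large folio (or a page we failed
			 * to get a reference on): fall back to the head,
			 * so a memory block in the middle of a 1 GiB
			 * hugetlb folio can still be offlined.
			 */
			folio = page_folio(page);
			if (!folio_try_get(folio))
				continue;
		}

		/*
		 * Keep the old behavior of jumping over the remaining
		 * tail pages instead of visiting them one by one.
		 */
		if (folio_test_large(folio))
			pfn = folio_pfn(folio) + folio_nr_pages(folio) - 1;
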
- } else if (PageTransHuge(page))
- pfn = page_to_pfn(head) + thp_nr_pages(page) - 1;
-
- if (!get_page_unless_zero(page))
- continue;
- /*
- * We can skip free pages. And we can deal with pages on
- * LRU and non-lru movable pages.
- */
- if (PageLRU(page))
- isolated = isolate_lru_page(page);
- else
- isolated = isolate_movable_page(page, ISOLATE_UNEVICTABLE);
- if (isolated) {
- list_add_tail(&page->lru, &source);
- if (!__PageMovable(page))
- inc_node_page_state(page, NR_ISOLATED_ANON +
- page_is_file_lru(page));
- } else {
+ /* Skip free folios, deal with hugetlb, LRU and non-lru movable folios. */
Can you clarify what "skip free folios" means? For free folios the
folio_get_nontail_page() shouldn't have succeeded. Did you mean if the
folio got freed in the meantime?
I think we could drop the comment here. The original comment was added
by commit 0c0e61958965 ("memory unplug: page offline"), when no
reference was taken on the page before isolating it, but since commit
700c2a46e882 ("mem-hotplug: call isolate_lru_page with elevated
refcount") the comment can be dropped, as the folio can't be freed here.
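
For reference, the reason the folio can't go away at this point:
folio_get_nontail_page() only hands back a folio after it has taken a
reference; roughly (simplified, from mm/internal.h):

static inline struct folio *folio_get_nontail_page(struct page *page)
{
	/* fails for pages with a zero refcount, i.e. free pages */
	if (unlikely(!get_page_unless_zero(page)))
		return NULL;
	/* only head pages become folios; drop the reference on tails */
	if (unlikely(PageTail(page))) {
		put_page(page);
		return NULL;
	}
	return (struct folio *)page;
}

So we hold a reference across the whole isolate_folio_to_list() call.
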
+ if (!isolate_folio_to_list(folio, &source)) {
if (__ratelimit(&migrate_rs)) {
pr_warn("failed to isolate pfn %lx\n", pfn);
dump_page(page, "isolation failed");
}
}