[patch 01/24] mm, memory_hotplug: don't bail out in do_migrate_range() prematurely

From: Oscar Salvador <osalvador@xxxxxxx>
Subject: mm, memory_hotplug: don't bail out in do_migrate_range() prematurely

do_migrate_range() takes a memory range and tries to isolate the pages in
that range in order to put them into a list.  This list will later be used
by migrate_pages() to know which pages we need to migrate.

Currently, if we fail to isolate a single page, we put all the already
isolated pages back on their LRU and bail out of the function.  This is
quite suboptimal, as it forces us to start over again because
scan_movable_pages() will give us the same range.  If there is no chance
that we can ever isolate that page, we will loop here forever.

The issue debugged in [1] has proved that.  During the debugging of that
issue, it was noticed that when do_migrate_range() fails to isolate a
single page, we just discard the work we have done so far and bail out,
which means that scan_movable_pages() will find the same set of pages
again.

Instead, we can just skip the error, keep isolating as many pages as
possible and then proceed with the call to migrate_pages().

This will allow us to do as much work as possible at once.
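To illustrate, the resulting flow roughly looks like the sketch below.  This
is a simplified view, not the literal post-patch code: the hugetlb and
HWPoison handling, the NR_ISOLATED accounting and the details of the pfn
walk are omitted.

	/* Simplified sketch of the new do_migrate_range() flow */
	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
		...
		if (PageLRU(page))
			ret = isolate_lru_page(page);
		else
			ret = isolate_movable_page(page, ISOLATE_UNEVICTABLE);
		if (!ret) {
			/* Success: queue the page for migration */
			list_add_tail(&page->lru, &source);
		} else {
			/* Failure is no longer fatal: warn and carry on */
			pr_warn("failed to isolate pfn %lx\n", pfn);
			dump_page(page, "isolation failed");
		}
		put_page(page);
	}
	if (!list_empty(&source)) {
		/* Migrate everything we managed to isolate in one go */
		ret = migrate_pages(&source, new_node_page, NULL, 0,
					MIGRATE_SYNC, MR_MEMORY_HOTPLUG);
		if (ret)
			putback_movable_pages(&source);
	}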

[1] https://lkml.org/lkml/2018/12/6/324

Michal said:

: I still think that this doesn't give us a whole picture.  Looping for
: ever is a bug.  Failing the isolation is quite possible and it should
: be an ephemeral condition (e.g.  a race with freeing the page or
: somebody else isolating the page for whatever reason).  And here comes
: the disadvantage of the current implementation.  We simply throw
: everything on the floor just because of an ephemeral condition.  The
: racy page_count check is quite dubious to prevent from that.

Link: http://lkml.kernel.org/r/20181211135312.27034-1-osalvador@xxxxxxx
Signed-off-by: Oscar Salvador <osalvador@xxxxxxx>
Acked-by: Michal Hocko <mhocko@xxxxxxxx>
Cc: David Hildenbrand <david@xxxxxxxxxx>
Cc: Dan Williams <dan.j.williams@xxxxxxxxx>
Cc: Jan Kara <jack@xxxxxxx>
Cc: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
Cc: William Kucharski <william.kucharski@xxxxxxxxxx>
Cc: Pavel Tatashin <pasha.tatashin@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/memory_hotplug.c |   18 ++----------------
 1 file changed, 2 insertions(+), 16 deletions(-)

--- a/mm/memory_hotplug.c~mm-memory_hotplug-dont-bail-out-in-do_migrate_range-prematurely
+++ a/mm/memory_hotplug.c
@@ -1344,7 +1344,6 @@ do_migrate_range(unsigned long start_pfn
 {
 	unsigned long pfn;
 	struct page *page;
-	int not_managed = 0;
 	int ret = 0;
 	LIST_HEAD(source);
 
@@ -1392,7 +1391,6 @@ do_migrate_range(unsigned long start_pfn
 		else
 			ret = isolate_movable_page(page, ISOLATE_UNEVICTABLE);
 		if (!ret) { /* Success */
-			put_page(page);
 			list_add_tail(&page->lru, &source);
 			if (!__PageMovable(page))
 				inc_node_page_state(page, NR_ISOLATED_ANON +
@@ -1401,22 +1399,10 @@ do_migrate_range(unsigned long start_pfn
 		} else {
 			pr_warn("failed to isolate pfn %lx\n", pfn);
 			dump_page(page, "isolation failed");
-			put_page(page);
-			/* Because we don't have big zone->lock. we should
-			   check this again here. */
-			if (page_count(page)) {
-				not_managed++;
-				ret = -EBUSY;
-				break;
-			}
 		}
+		put_page(page);
 	}
 	if (!list_empty(&source)) {
-		if (not_managed) {
-			putback_movable_pages(&source);
-			goto out;
-		}
-
 		/* Allocate a new page from the nearest neighbor node */
 		ret = migrate_pages(&source, new_node_page, NULL, 0,
 					MIGRATE_SYNC, MR_MEMORY_HOTPLUG);
@@ -1429,7 +1415,7 @@ do_migrate_range(unsigned long start_pfn
 			putback_movable_pages(&source);
 		}
 	}
-out:
+
 	return ret;
 }
 
_


