As the comment says, new_page_nodemask() will try to allocate from a
different node, but the current behavior does just the opposite by
passing the current nid as preferred_nid to new_page_nodemask(). This
patch passes next_memory_node(nid) as preferred_nid to
new_page_nodemask() to fix it.

Signed-off-by: Wei Yang <richard.weiyang@xxxxxxxxx>
---
 mm/memory_hotplug.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 6910e0eea074..0c075aac0a81 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1335,7 +1335,7 @@ static struct page *new_node_page(struct page *page, unsigned long private)
 	if (nodes_empty(nmask))
 		node_set(nid, nmask);
 
-	return new_page_nodemask(page, nid, &nmask);
+	return new_page_nodemask(page, next_memory_node(nid), &nmask);
 }
 
 #define NR_OFFLINE_AT_ONCE_PAGES	(256)
-- 
2.15.1