On 2015/8/18 8:25, David Rientjes wrote:
> On Mon, 17 Aug 2015, Jiang Liu wrote:
>
>> Function xpc_create_gru_mq_uv() allocates memory with __GFP_THISNODE
>> flag set, which may cause permanent memory allocation failure on a
>> memoryless node. So replace cpu_to_node() with cpu_to_mem() to better
>> support memoryless nodes. For a node with memory, cpu_to_mem() is the
>> same as cpu_to_node().
>>
>> Signed-off-by: Jiang Liu <jiang.liu@xxxxxxxxxxxxxxx>
>> ---
>>  drivers/misc/sgi-xp/xpc_uv.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/drivers/misc/sgi-xp/xpc_uv.c b/drivers/misc/sgi-xp/xpc_uv.c
>> index 95c894482fdd..9210981c0d5b 100644
>> --- a/drivers/misc/sgi-xp/xpc_uv.c
>> +++ b/drivers/misc/sgi-xp/xpc_uv.c
>> @@ -238,7 +238,7 @@ xpc_create_gru_mq_uv(unsigned int mq_size, int cpu, char *irq_name,
>>
>>  	mq->mmr_blade = uv_cpu_to_blade_id(cpu);
>>
>> -	nid = cpu_to_node(cpu);
>> +	nid = cpu_to_mem(cpu);
>>  	page = alloc_pages_exact_node(nid,
>>  				      GFP_KERNEL | __GFP_ZERO | __GFP_THISNODE,
>>  				      pg_order);
>
> Why not simply fix build_zonelists_node() so that the __GFP_THISNODE
> zonelists are set up to reference the zones of cpu_to_mem() for
> memoryless nodes?
>
> It seems much better than checking and maintaining every __GFP_THISNODE
> user to determine if they are using a memoryless node or not. I don't
> feel that this solution is maintainable in the long term.

Hi David,
	Some use cases, such as memory migration, expect the page
allocator to reject allocation requests when the local node has no
memory. So we have two patterns:
1) alloc_pages_node(cpu_to_node(cpu), __GFP_THISNODE, order) to allocate
   memory from the local node only, failing if it is memoryless.
2) alloc_pages_node(cpu_to_mem(cpu), __GFP_THISNODE, order) to allocate
   memory from the local node, or from the nearest node with memory if
   the local node is memoryless.
Not sure whether we could consolidate all callers specifying
__GFP_THISNODE into one pattern; this needs more investigation.
Thanks!
Gerry