This is a note to let you know that I've just added the patch titled

    ovs: do not allocate memory from offline numa node

to the 4.1-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
    ovs-do-not-allocate-memory-from-offline-numa-node.patch
and it can be found in the queue-4.1 subdirectory.

If you, or anyone else, feels it should not be added to the stable
tree, please let <stable@xxxxxxxxxxxxxxx> know about it.


>From foo@baz Thu Oct 22 17:25:37 PDT 2015
From: Konstantin Khlebnikov <khlebnikov@xxxxxxxxxxxxxx>
Date: Fri, 2 Oct 2015 13:18:22 +0300
Subject: ovs: do not allocate memory from offline numa node

From: Konstantin Khlebnikov <khlebnikov@xxxxxxxxxxxxxx>

[ Upstream commit 598c12d0ba6de9060f04999746eb1e015774044b ]

When openvswitch tries to allocate memory from offline NUMA node 0:

    stats = kmem_cache_alloc_node(flow_stats_cache,
                                  GFP_KERNEL | __GFP_ZERO, 0)

it trips VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES || !node_online(nid))
[ replaced with VM_WARN_ON(!node_online(nid)) recently ] in linux/gfp.h.

This patch disables NUMA affinity in this case: when node 0 is offline,
NUMA_NO_NODE is passed instead, letting the allocator use any online node.

Signed-off-by: Konstantin Khlebnikov <khlebnikov@xxxxxxxxxxxxxx>
Acked-by: Pravin B Shelar <pshelar@xxxxxxxxxx>
Signed-off-by: David S. Miller <davem@xxxxxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 net/openvswitch/flow_table.c |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

--- a/net/openvswitch/flow_table.c
+++ b/net/openvswitch/flow_table.c
@@ -92,7 +92,8 @@ struct sw_flow *ovs_flow_alloc(void)
 
 	/* Initialize the default stat node. */
 	stats = kmem_cache_alloc_node(flow_stats_cache,
-				      GFP_KERNEL | __GFP_ZERO, 0);
+				      GFP_KERNEL | __GFP_ZERO,
+				      node_online(0) ? 0 : NUMA_NO_NODE);
 	if (!stats)
 		goto err;
 

Patches currently in stable-queue which might be from khlebnikov@xxxxxxxxxxxxxx are

queue-4.1/ovs-do-not-allocate-memory-from-offline-numa-node.patch

--
To unsubscribe from this list: send the line "unsubscribe stable" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
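[ Editor's note: the guard in the hunk above generalizes to any
node-pinned slab allocation. A minimal sketch of the pattern follows;
the helper name alloc_on_preferred_node and its parameters are
illustrative assumptions, not names from this patch. ]

    #include <linux/gfp.h>       /* GFP_KERNEL, __GFP_ZERO */
    #include <linux/nodemask.h>  /* node_online() */
    #include <linux/numa.h>      /* NUMA_NO_NODE */
    #include <linux/slab.h>      /* kmem_cache_alloc_node() */

    /* Allocate from @pref_node only when that node is online; otherwise
     * pass NUMA_NO_NODE so the allocator may fall back to any online
     * node instead of tripping VM_WARN_ON(!node_online(nid)) in
     * linux/gfp.h.
     */
    static void *alloc_on_preferred_node(struct kmem_cache *cache,
                                         int pref_node)
    {
            int node = node_online(pref_node) ? pref_node : NUMA_NO_NODE;

            return kmem_cache_alloc_node(cache,
                                         GFP_KERNEL | __GFP_ZERO, node);
    }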