Currently, while allocating a slab for an offline node, we use its
associated node_numa_mem to search for a partial slab. If we don't find
a partial slab, we try allocating a slab from the offline node using
__alloc_pages_node(). However, this is bound to fail:

NIP [c00000000039a300] __alloc_pages_nodemask+0x130/0x3b0
LR [c00000000039a3c4] __alloc_pages_nodemask+0x1f4/0x3b0
Call Trace:
[c0000008b36837f0] [c00000000039a3b4] __alloc_pages_nodemask+0x1e4/0x3b0 (unreliable)
[c0000008b3683870] [c0000000003d1ff8] new_slab+0x128/0xcf0
[c0000008b3683950] [c0000000003d6060] ___slab_alloc+0x410/0x820
[c0000008b3683a40] [c0000000003d64a4] __slab_alloc+0x34/0x60
[c0000008b3683a70] [c0000000003d78b0] __kmalloc_node+0x110/0x490
[c0000008b3683af0] [c000000000343a08] kvmalloc_node+0x58/0x110
[c0000008b3683b30] [c0000000003ffd44] mem_cgroup_css_online+0x104/0x270
[c0000008b3683b90] [c000000000234e08] online_css+0x48/0xd0
[c0000008b3683bc0] [c00000000023dedc] cgroup_apply_control_enable+0x2ec/0x4d0
[c0000008b3683ca0] [c0000000002416f8] cgroup_mkdir+0x228/0x5f0
[c0000008b3683d10] [c000000000520360] kernfs_iop_mkdir+0x90/0xf0
[c0000008b3683d50] [c00000000043e400] vfs_mkdir+0x110/0x230
[c0000008b3683da0] [c000000000441ee0] do_mkdirat+0xb0/0x1a0
[c0000008b3683e20] [c00000000000b278] system_call+0x5c/0x68

Mitigate this by allocating the new slab from the node_numa_mem.

Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Cc: linux-mm@xxxxxxxxx
Cc: Mel Gorman <mgorman@xxxxxxx>
Cc: Michael Ellerman <mpe@xxxxxxxxxxxxxx>
Cc: Sachin Sant <sachinp@xxxxxxxxxxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxxxx>
Cc: Christopher Lameter <cl@xxxxxxxxx>
Cc: linuxppc-dev@xxxxxxxxxxxxxxxx
Cc: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
Cc: Kirill Tkhai <ktkhai@xxxxxxxxxxxxx>
Cc: Vlastimil Babka <vbabka@xxxxxxx>
Cc: Srikar Dronamraju <srikar@xxxxxxxxxxxxxxxxxx>
Cc: Bharata B Rao <bharata@xxxxxxxxxxxxx>
Cc: Nathan Lynch <nathanl@xxxxxxxxxxxxx>
Reported-by: Sachin Sant <sachinp@xxxxxxxxxxxxxxxxxx>
Tested-by: Sachin Sant <sachinp@xxxxxxxxxxxxxxxxxx>
Signed-off-by: Srikar Dronamraju <srikar@xxxxxxxxxxxxxxxxxx>
---
Changelog v1 -> v2:
- Handled comments from Vlastimil Babka.
- node is now set to node_numa_mem in new_slab_objects().

 mm/slub.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 1c55bf7892bf..2dc603a84290 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2475,6 +2475,9 @@ static inline void *new_slab_objects(struct kmem_cache *s, gfp_t flags,
 	if (freelist)
 		return freelist;
 
+	if (node != NUMA_NO_NODE && !node_present_pages(node))
+		node = node_to_mem_node(node);
+
 	page = new_slab(s, flags, node);
 	if (page) {
 		c = raw_cpu_ptr(s->cpu_slab);
@@ -2569,12 +2572,10 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 redo:
 
 	if (unlikely(!node_match(page, node))) {
-		int searchnode = node;
-
 		if (node != NUMA_NO_NODE && !node_present_pages(node))
-			searchnode = node_to_mem_node(node);
+			node = node_to_mem_node(node);
 
-		if (unlikely(!node_match(page, searchnode))) {
+		if (unlikely(!node_match(page, node))) {
 			stat(s, ALLOC_NODE_MISMATCH);
 			deactivate_slab(s, page, c->freelist, c);
 			goto new_slab;
-- 
2.18.1
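
Postscript for reviewers (not part of the patch): below is a minimal,
self-contained userspace sketch of the memoryless-node fallback the
diff applies before calling new_slab(). node_present_pages(),
node_to_mem_node() and NUMA_NO_NODE are the real kernel interfaces used
in the patch; the stub node tables and main() driver here are
hypothetical, purely to illustrate the redirection.

/* Illustrative sketch only; not kernel code. Stubs are hypothetical. */
#include <stdio.h>

#define NUMA_NO_NODE	(-1)
#define MAX_NUMNODES	4

/* Hypothetical stand-ins for per-node state: nodes 1 and 3 are
 * memoryless, and their nearest node with memory is precomputed. */
static const unsigned long present_pages[MAX_NUMNODES] = { 1024, 0, 2048, 0 };
static const int nearest_mem_node[MAX_NUMNODES] = { 0, 0, 2, 2 };

static unsigned long node_present_pages(int node)
{
	return present_pages[node];
}

static int node_to_mem_node(int node)
{
	return nearest_mem_node[node];
}

/* The fallback the patch adds before calling new_slab(): a requested
 * node that has no memory is replaced by its node_numa_mem. */
static int pick_alloc_node(int node)
{
	if (node != NUMA_NO_NODE && !node_present_pages(node))
		node = node_to_mem_node(node);
	return node;
}

int main(void)
{
	/* Node 1 is memoryless; the allocation is redirected to node 0. */
	printf("node 1 -> %d\n", pick_alloc_node(1));
	/* Node 2 has memory; no redirection happens. */
	printf("node 2 -> %d\n", pick_alloc_node(2));
	/* NUMA_NO_NODE is passed through untouched. */
	printf("NUMA_NO_NODE -> %d\n", pick_alloc_node(NUMA_NO_NODE));
	return 0;
}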