Hi,
On 08/02/2018 09:23 AM, Christopher Lameter wrote:
> On Wed, 1 Aug 2018, Jeremy Linton wrote:
>> diff --git a/mm/slub.c b/mm/slub.c
>> index 51258eff4178..e03719bac1e2 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -2519,6 +2519,8 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
>> 	if (unlikely(!node_match(page, searchnode))) {
>> 		stat(s, ALLOC_NODE_MISMATCH);
>> 		deactivate_slab(s, page, c->freelist, c);
>> +		if (!node_online(searchnode))
>> +			node = NUMA_NO_NODE;
>> 		goto new_slab;
>> 	}
>> }
> Would it not be better to implement this check in the page allocator?
> There is also the issue of how to fallback to the nearest node.
Possibly? Falling back to the nearest node, though, should already be
handled when memoryless-node support is enabled, which in the problematic
case it isn't.
> NUMA_NO_NODE should fallback to the current memory allocation policy but
> it seems by inserting it here you would end up just with the default node
> for the processor.
I picked this spot (compared to 2/2) because a number of paths funnel
through here, and in this case it shouldn't be a very hot path.