If the current allocation context does not have __GFP_MEMALLOC in its gfpflags, it should not be handed slab objects from slab pages that were previously allocated with __GFP_MEMALLOC. This rule is already enforced in the slab allocation slowpath: when gfpflags does not contain __GFP_MEMALLOC but the per-cpu slab page was allocated with __GFP_MEMALLOC, the allocator first deactivates the per-cpu slab page and then allocates a new slab page with the current context's gfpflags. However, the fastpath does not perform this check. Add the same pfmemalloc_match() check to the fastpath.

Signed-off-by: Ohhoon Kwon <ohkwon1043@xxxxxxxxx>
---
 mm/slub.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/slub.c b/mm/slub.c
index 74d92aa4a3a2..c77cd548e106 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3179,7 +3179,8 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s, struct list_l
 	 * there is a suitable cpu freelist.
 	 */
 	if (IS_ENABLED(CONFIG_PREEMPT_RT) ||
-	    unlikely(!object || !slab || !node_match(slab, node))) {
+	    unlikely(!object || !slab || !node_match(slab, node) ||
+		     !pfmemalloc_match(slab, gfpflags))) {
 		object = __slab_alloc(s, gfpflags, node, addr, c);
 	} else {
 		void *next_object = get_freepointer_safe(s, object);
-- 
2.25.1
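
(Editorial note, not part of the patch: the check added above relies on pfmemalloc_match() refusing to hand out objects from a pfmemalloc slab page when the caller is not itself allowed to use memory reserves. The following is a minimal userspace sketch of that decision for illustration only; the flag value, struct slab layout, and helper body are simplified stand-ins, not the kernel definitions in mm/slub.c.)

    /*
     * Userspace model of the decision this patch adds to the fastpath.
     * All names and values here are illustrative stand-ins.
     */
    #include <stdbool.h>
    #include <stdio.h>

    #define __GFP_MEMALLOC 0x1u  /* stand-in bit, not the real gfp flag value */

    struct slab {
            bool pfmemalloc;     /* slab page was allocated from memory reserves */
    };

    /*
     * Mirrors the idea of pfmemalloc_match(): a pfmemalloc slab page may
     * only serve contexts that are themselves allowed to dip into reserves.
     */
    static bool pfmemalloc_match(const struct slab *slab, unsigned int gfpflags)
    {
            if (slab->pfmemalloc)
                    return (gfpflags & __GFP_MEMALLOC) != 0;
            return true;
    }

    int main(void)
    {
            struct slab reserve_slab = { .pfmemalloc = true };

            /* Ordinary context: must not take this object, so the fastpath
             * should fall back to __slab_alloc(). Prints 0. */
            printf("plain alloc allowed: %d\n",
                   pfmemalloc_match(&reserve_slab, 0));

            /* __GFP_MEMALLOC context: the fastpath may hand out the object.
             * Prints 1. */
            printf("memalloc alloc allowed: %d\n",
                   pfmemalloc_match(&reserve_slab, __GFP_MEMALLOC));
            return 0;
    }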