There is an imbalance between when slab_pre_alloc_hook calls
memcg_kmem_get_cache and when slab_post_alloc_hook calls
memcg_kmem_put_cache: the get is done only for memcg-accounted
allocations (__GFP_ACCOUNT or SLAB_ACCOUNT), while the put is done
unconditionally, so an unaccounted allocation drops a reference that
was never taken.

This can cause a memcg kmem cache to be destroyed right as an object
from that cache is being allocated, which is probably not good. It
could lead to things like a memcg allocating new kmalloc slabs instead
of using freed space in old ones, possibly memory leaks, and possibly
oopses as a memcg kmalloc slab is getting destroyed on one CPU while
another CPU is trying to do an allocation from that same memcg.

The obvious fix is to call memcg_kmem_put_cache under the same
condition we use to decide whether to call memcg_kmem_get_cache.

I am not sure how long this bug has been around, since the last
changeset to touch that code - 452647784b2f ("mm: memcontrol: cleanup
kmem charge functions") - merely moved the bug from one location to
another. I am still tagging that changeset, because the fix should
automatically apply that far back.

Signed-off-by: Rik van Riel <riel@xxxxxxxxxxx>
Fixes: 452647784b2f ("mm: memcontrol: cleanup kmem charge functions")
Cc: kernel-team@xxxxxx
Cc: linux-mm@xxxxxxxxx
Cc: stable@xxxxxxxxxxxxxxx
Cc: Alexey Dobriyan <adobriyan@xxxxxxxxx>
Cc: Christoph Lameter <cl@xxxxxxxxx>
Cc: Pekka Enberg <penberg@xxxxxxxxxx>
Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Cc: David Rientjes <rientjes@xxxxxxxxxx>
Cc: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Tejun Heo <tj@xxxxxxxxxx>
---
 mm/slab.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/slab.h b/mm/slab.h
index 4190c24ef0e9..ab3d95bef8a0 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -444,7 +444,8 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s, gfp_t flags,
 		p[i] = kasan_slab_alloc(s, object, flags);
 	}
 
-	if (memcg_kmem_enabled())
+	if (memcg_kmem_enabled() &&
+	    ((flags & __GFP_ACCOUNT) || (s->flags & SLAB_ACCOUNT)))
 		memcg_kmem_put_cache(s);
 }
 
-- 
2.17.1
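
For reviewers, here is roughly what the get side looks like, so the
symmetry the patch restores is easier to see. This is an abridged
sketch of slab_pre_alloc_hook() from mm/slab.h around the same era as
the Fixes commit, not an exact quote; unrelated checks are trimmed:

static inline struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s,
						     gfp_t flags)
{
	flags &= gfp_allowed_mask;

	might_sleep_if(gfpflags_allow_blocking(flags));

	if (should_failslab(s, flags))
		return NULL;

	/*
	 * The get side: a reference on the per-memcg cache is taken
	 * only for accounted allocations. This is exactly the
	 * condition the put side now mirrors.
	 */
	if (memcg_kmem_enabled() &&
	    ((flags & __GFP_ACCOUNT) || (s->flags & SLAB_ACCOUNT)))
		return memcg_kmem_get_cache(s);

	return s;
}

With the fix applied, memcg_kmem_put_cache() is reached if and only if
memcg_kmem_get_cache() was called, so the reference count on the
per-memcg cache stays balanced.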