Since a dead memcg cache is destroyed only after the last slab allocated
to it is freed, we must disable caching of empty slabs for such caches,
otherwise they will hang around forever. This patch makes SLUB discard a
dead memcg cache's slabs as soon as they become empty. To achieve that,
it disables per-cpu partial lists for dead caches (see put_cpu_partial)
and forbids keeping empty slabs on per-node partial lists by setting the
cache's min_partial to 0 on kmem_cache_shrink, which is always called on
memcg offline (see memcg_unregister_all_caches).

Signed-off-by: Vladimir Davydov <vdavydov@xxxxxxxxxxxxx>
Thanks-to: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
---
 mm/slub.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/mm/slub.c b/mm/slub.c
index 52565a9426ef..0d2d1978e62c 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2064,6 +2064,14 @@ static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
 	} while (this_cpu_cmpxchg(s->cpu_slab->partial, oldpage, page)
 								!= oldpage);
+
+	if (memcg_cache_dead(s)) {
+		unsigned long flags;
+
+		local_irq_save(flags);
+		unfreeze_partials(s, this_cpu_ptr(s->cpu_slab));
+		local_irq_restore(flags);
+	}
 #endif
 }
@@ -3409,6 +3417,9 @@ int __kmem_cache_shrink(struct kmem_cache *s)
 		kmalloc(sizeof(struct list_head) * objects, GFP_KERNEL);
 	unsigned long flags;

+	if (memcg_cache_dead(s))
+		s->min_partial = 0;
+
 	if (!slabs_by_inuse) {
 		/*
 		 * Do not fail shrinking empty slabs if allocation of the
--
1.7.10.4