From: Shakeel Butt <shakeelb@xxxxxxxxxx>
Subject: slab, slub: skip unnecessary kasan_cache_shutdown()

The kasan quarantine is designed to delay freeing slab objects to catch
use-after-free.  The quarantine can be large (several percent of machine
memory size).  When kmem_caches are deleted, related objects are flushed
from the quarantine, but this requires scanning the entire quarantine,
which can be very slow.  We have seen the kernel busily working on this
while holding slab_mutex and badly affecting cache_reaper, slabinfo
readers and memcg kmem cache creations.

It can easily be reproduced by the following script:

	yes . | head -1000000 | xargs stat > /dev/null
	for i in `seq 1 10`; do
		seq 500 | (cd /cg/memory && xargs mkdir)
		seq 500 | xargs -I{} sh -c 'echo $BASHPID > \
			/cg/memory/{}/tasks && exec stat .' > /dev/null
		seq 500 | (cd /cg/memory && xargs rmdir)
	done

The busy stack:

	kasan_cache_shutdown
	shutdown_cache
	memcg_destroy_kmem_caches
	mem_cgroup_css_free
	css_free_rwork_fn
	process_one_work
	worker_thread
	kthread
	ret_from_fork

This patch is based on the observation that if the kmem_cache to be
destroyed is empty then there should not be any objects of this cache in
the quarantine.

Without the patch the script got stuck for a couple of hours.  With the
patch the script completed within a second.

Link: http://lkml.kernel.org/r/20180327230603.54721-1-shakeelb@xxxxxxxxxx
Signed-off-by: Shakeel Butt <shakeelb@xxxxxxxxxx>
Reviewed-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Acked-by: Andrey Ryabinin <aryabinin@xxxxxxxxxxxxx>
Acked-by: Christoph Lameter <cl@xxxxxxxxx>
Cc: Vladimir Davydov <vdavydov.dev@xxxxxxxxx>
Cc: Alexander Potapenko <glider@xxxxxxxxxx>
Cc: Greg Thelen <gthelen@xxxxxxxxxx>
Cc: Dmitry Vyukov <dvyukov@xxxxxxxxxx>
Cc: Pekka Enberg <penberg@xxxxxxxxxx>
Cc: David Rientjes <rientjes@xxxxxxxxxx>
Cc: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/kasan/kasan.c |    3 ++-
 mm/slab.c        |   12 ++++++++++++
 mm/slab.h        |    1 +
 mm/slub.c        |   11 +++++++++++
 4 files changed, 26 insertions(+), 1 deletion(-)

diff -puN mm/kasan/kasan.c~slab-slub-skip-unnecessary-kasan_cache_shutdown mm/kasan/kasan.c
--- a/mm/kasan/kasan.c~slab-slub-skip-unnecessary-kasan_cache_shutdown
+++ a/mm/kasan/kasan.c
@@ -382,7 +382,8 @@ void kasan_cache_shrink(struct kmem_cach
 
 void kasan_cache_shutdown(struct kmem_cache *cache)
 {
-	quarantine_remove_cache(cache);
+	if (!__kmem_cache_empty(cache))
+		quarantine_remove_cache(cache);
 }
 
 size_t kasan_metadata_size(struct kmem_cache *cache)
diff -puN mm/slab.c~slab-slub-skip-unnecessary-kasan_cache_shutdown mm/slab.c
--- a/mm/slab.c~slab-slub-skip-unnecessary-kasan_cache_shutdown
+++ a/mm/slab.c
@@ -2291,6 +2291,18 @@ out:
 	return nr_freed;
 }
 
+bool __kmem_cache_empty(struct kmem_cache *s)
+{
+	int node;
+	struct kmem_cache_node *n;
+
+	for_each_kmem_cache_node(s, node, n)
+		if (!list_empty(&n->slabs_full) ||
+		    !list_empty(&n->slabs_partial))
+			return false;
+	return true;
+}
+
 int __kmem_cache_shrink(struct kmem_cache *cachep)
 {
 	int ret = 0;
diff -puN mm/slab.h~slab-slub-skip-unnecessary-kasan_cache_shutdown mm/slab.h
--- a/mm/slab.h~slab-slub-skip-unnecessary-kasan_cache_shutdown
+++ a/mm/slab.h
@@ -166,6 +166,7 @@ static inline slab_flags_t kmem_cache_fl
 			      SLAB_TEMPORARY | \
 			      SLAB_ACCOUNT)
 
+bool __kmem_cache_empty(struct kmem_cache *);
 int __kmem_cache_shutdown(struct kmem_cache *);
 void __kmem_cache_release(struct kmem_cache *);
 int __kmem_cache_shrink(struct kmem_cache *);
diff -puN mm/slub.c~slab-slub-skip-unnecessary-kasan_cache_shutdown mm/slub.c
--- a/mm/slub.c~slab-slub-skip-unnecessary-kasan_cache_shutdown
+++ a/mm/slub.c
@@ -3696,6 +3696,17 @@ static void free_partial(struct kmem_cac
 		discard_slab(s, page);
 }
 
+bool __kmem_cache_empty(struct kmem_cache *s)
+{
+	int node;
+	struct kmem_cache_node *n;
+
+	for_each_kmem_cache_node(s, node, n)
+		if (n->nr_partial || slabs_node(s, node))
+			return false;
+	return true;
+}
+
 /*
  * Release all resources used by a slab cache.
  */
_
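
For readers who want a feel for why the shutdown path is so expensive, the
sketch below is a minimal, stand-alone user-space model of the idea, not
kernel code.  It assumes a single flat quarantine list that mixes objects
from many caches; names such as toy_cache, toy_quarantine_remove_cache()
and toy_cache_shutdown() are purely illustrative and have no kernel
counterparts (the real kasan quarantine is split into per-CPU and global
batches and is more involved).

/*
 * Stand-alone illustration only: the toy_* names are hypothetical and do
 * not correspond to kernel APIs.  Build with: cc -o toy toy.c
 */
#include <stdio.h>
#include <stdlib.h>

struct toy_object {
	int cache_id;			/* which cache the object came from */
	struct toy_object *next;	/* singly linked quarantine list */
};

struct toy_cache {
	int id;
	long nr_objects;		/* objects currently owned by the cache */
};

/* Global quarantine: deferred frees from all caches, mixed together. */
static struct toy_object *quarantine_head;

/* Slow path: walk the whole quarantine to drop one cache's objects. */
static void toy_quarantine_remove_cache(struct toy_cache *c)
{
	struct toy_object **pp = &quarantine_head;
	long scanned = 0;

	while (*pp) {
		struct toy_object *obj = *pp;

		scanned++;
		if (obj->cache_id == c->id) {
			*pp = obj->next;
			free(obj);
			c->nr_objects--;
		} else {
			pp = &obj->next;
		}
	}
	printf("cache %d: scanned %ld quarantined objects\n", c->id, scanned);
}

/* Analogue of the patched kasan_cache_shutdown(): skip empty caches. */
static void toy_cache_shutdown(struct toy_cache *c)
{
	/*
	 * A cache with no objects cannot have contributed anything to
	 * the quarantine, so the O(quarantine size) walk is pointless.
	 */
	if (c->nr_objects == 0) {
		printf("cache %d: empty, skipping quarantine walk\n", c->id);
		return;
	}
	toy_quarantine_remove_cache(c);
}

int main(void)
{
	struct toy_cache busy = { .id = 1 };
	struct toy_cache idle = { .id = 2 };
	long i;

	/* Quarantine 100000 objects, all belonging to the busy cache. */
	for (i = 0; i < 100000; i++) {
		struct toy_object *obj = malloc(sizeof(*obj));

		obj->cache_id = busy.id;
		obj->next = quarantine_head;
		quarantine_head = obj;
		busy.nr_objects++;
	}

	toy_cache_shutdown(&idle);	/* returns immediately */
	toy_cache_shutdown(&busy);	/* pays for the full walk */
	return 0;
}

In this model the empty cache returns immediately while the busy one pays
for walking every quarantined object, mirroring why shutdown of an empty
kmem_cache (the common case in the memcg cache-destruction path above) no
longer needs to touch the quarantine at all.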