> On Apr 24, 2020, at 11:12 AM, Waiman Long <longman@xxxxxxxxxx> wrote:
>
> When the slub shrink sysfs file is written into, the function call
> sequence is as follows:
>
>   kernfs_fop_write
>     => slab_attr_store
>       => shrink_store
>         => kmem_cache_shrink_all
>
> It turns out that doing a memcg cache scan in kmem_cache_shrink_all()
> is redundant, as the same memcg cache scan is already being done in
> slab_attr_store(). So revert commit 04f768a39d55 ("mm, slab: extend
> slab/shrink to shrink all memcg caches") except for the documentation
> change, which is still valid.

BTW, currently doing,

  # echo 1 > /sys/kernel/slab/fs_cache/shrink

would crash the kernel with stack corruption, probably due to the large
number of memcg caches. I am still figuring out if the above commit
04f768a39d55 is the culprit.

[ 7938.979589][T106403] Kernel panic - not syncing: stack-protector: Kernel stack is corrupted in: __kmem_cache_create+0x7f8/0x800
[ 7938.979640][T106403] CPU: 80 PID: 106403 Comm: kworker/80:2 Not tainted 5.7.0-rc2-next-20200424 #5
[ 7938.979670][T106403] Workqueue: memcg_kmem_cache memcg_kmem_cache_create_func
[ 7938.979708][T106403] Call Trace:
[ 7938.979745][T106403] [c000200012e0f880] [c000000000716498] dump_stack+0xfc/0x174 (unreliable)
[ 7938.979789][T106403] [c000200012e0f8d0] [c00000000010d7d0] panic+0x224/0x4d4
[ 7938.979816][T106403] [c000200012e0f970] [c00000000010d05c] __stack_chk_fail+0x2c/0x30
[ 7938.979865][T106403] [c000200012e0f9d0] [c0000000004b1fb8] __kmem_cache_create+0x7f8/0x800
[ 7938.979914][T106403] [c000200012e0faf0] [4320383d35334320] 0x4320383d35334320

>
> Signed-off-by: Waiman Long <longman@xxxxxxxxxx>
> ---
>  mm/slab.h        |  1 -
>  mm/slab_common.c | 37 -------------------------------------
>  mm/slub.c        |  2 +-
>  3 files changed, 1 insertion(+), 39 deletions(-)
>
> diff --git a/mm/slab.h b/mm/slab.h
> index 207c83ef6e06..0937cb2ae8aa 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -237,7 +237,6 @@ int __kmem_cache_shrink(struct kmem_cache *);
>  void __kmemcg_cache_deactivate(struct kmem_cache *s);
>  void __kmemcg_cache_deactivate_after_rcu(struct kmem_cache *s);
>  void slab_kmem_cache_release(struct kmem_cache *);
> -void kmem_cache_shrink_all(struct kmem_cache *s);
>
>  struct seq_file;
>  struct file;
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index 23c7500eea7d..2e367ab8c15c 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -995,43 +995,6 @@ int kmem_cache_shrink(struct kmem_cache *cachep)
>  }
>  EXPORT_SYMBOL(kmem_cache_shrink);
>
> -/**
> - * kmem_cache_shrink_all - shrink a cache and all memcg caches for root cache
> - * @s: The cache pointer
> - */
> -void kmem_cache_shrink_all(struct kmem_cache *s)
> -{
> -	struct kmem_cache *c;
> -
> -	if (!IS_ENABLED(CONFIG_MEMCG_KMEM) || !is_root_cache(s)) {
> -		kmem_cache_shrink(s);
> -		return;
> -	}
> -
> -	get_online_cpus();
> -	get_online_mems();
> -	kasan_cache_shrink(s);
> -	__kmem_cache_shrink(s);
> -
> -	/*
> -	 * We have to take the slab_mutex to protect from the memcg list
> -	 * modification.
> -	 */
> -	mutex_lock(&slab_mutex);
> -	for_each_memcg_cache(c, s) {
> -		/*
> -		 * Don't need to shrink deactivated memcg caches.
> -		 */
> -		if (s->flags & SLAB_DEACTIVATED)
> -			continue;
> -		kasan_cache_shrink(c);
> -		__kmem_cache_shrink(c);
> -	}
> -	mutex_unlock(&slab_mutex);
> -	put_online_mems();
> -	put_online_cpus();
> -}
> -
>  bool slab_is_available(void)
>  {
>  	return slab_state >= UP;
> diff --git a/mm/slub.c b/mm/slub.c
> index 9bf44955c4f1..183ccc364ccf 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -5343,7 +5343,7 @@ static ssize_t shrink_store(struct kmem_cache *s,
>  			const char *buf, size_t length)
>  {
>  	if (buf[0] == '1')
> -		kmem_cache_shrink_all(s);
> +		kmem_cache_shrink(s);
>  	else
>  		return -EINVAL;
>  	return length;
> --
> 2.18.1
>