It was my (limited) understanding that the subsequent 2-patch set
superseded this patch. Indeed, the 2-patch set seems to solve both
the SLAB and SLUB bug reports.

References:
https://bugzilla.kernel.org/show_bug.cgi?id=172981
https://bugzilla.kernel.org/show_bug.cgi?id=172991
https://patchwork.kernel.org/patch/9361853
https://patchwork.kernel.org/patch/9359271

On 2016.10.05 23:21 Joonsoo Kim wrote:
> From: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
>
> There is a bug report that SLAB causes an extreme load average due to
> over 2000 kworker threads:
>
> https://bugzilla.kernel.org/show_bug.cgi?id=172981
>
> This issue is caused by the kmemcg feature, which tries to create a new
> set of kmem_caches for each memcg. Recently, kmem_cache creation was
> slowed down by synchronize_sched(), and further kmem_cache creation is
> also delayed since kmem_cache creation is serialized by the global
> slab_mutex lock. So the number of kworkers trying to create kmem_caches
> increases quickly. synchronize_sched() is needed for lockless access to
> the node's shared array, but it's not needed when a new kmem_cache is
> created. So, this patch rules out that case.
>
> Fixes: 801faf0db894 ("mm/slab: lockless decision to grow cache")
> Cc: stable@xxxxxxxxxxxxxxx
> Reported-by: Doug Smythies <dsmythies@xxxxxxxxx>
> Tested-by: Doug Smythies <dsmythies@xxxxxxxxx>
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
> ---
>  mm/slab.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/slab.c b/mm/slab.c
> index 6508b4d..3c83c29 100644
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -961,7 +961,7 @@ static int setup_kmem_cache_node(struct kmem_cache *cachep,
>  	 * guaranteed to be valid until irq is re-enabled, because it will be
>  	 * freed after synchronize_sched().
>  	 */
> -	if (force_change)
> +	if (old_shared && force_change)
>  		synchronize_sched();
>
> fail:
> --
> 1.9.1