On Mon, Jul 27, 2020 at 10:12 PM Greg Kroah-Hartman
<gregkh@xxxxxxxxxxxxxxxxxxx> wrote:
>
> From: Muchun Song <songmuchun@xxxxxxxxxxxxx>
>
> commit d38a2b7a9c939e6d7329ab92b96559ccebf7b135 upstream.
>
> If the kmem_cache refcount is greater than one, we should not mark the
> root kmem_cache as dying. If we mark the root kmem_cache dying
> incorrectly, the non-root kmem_cache can never be destroyed. It
> resulted in memory leak when memcg was destroyed. We can use the
> following steps to reproduce.
>
> 1) Use kmem_cache_create() to create a new kmem_cache named A.
> 2) Coincidentally, the kmem_cache A is an alias for kmem_cache B,
>    so the refcount of B is just increased.
> 3) Use kmem_cache_destroy() to destroy the kmem_cache A, just
>    decrease the B's refcount but mark the B as dying.
> 4) Create a new memory cgroup and alloc memory from the kmem_cache
>    B. It leads to create a non-root kmem_cache for allocating memory.
> 5) When destroy the memory cgroup created in the step 4), the
>    non-root kmem_cache can never be destroyed.
>
> If we repeat steps 4) and 5), this will cause a lot of memory leak. So
> only when refcount reach zero, we mark the root kmem_cache as dying.
>
> Fixes: 92ee383f6daa ("mm: fix race between kmem_cache destroy, create and deactivate")
> Signed-off-by: Muchun Song <songmuchun@xxxxxxxxxxxxx>
> Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> Reviewed-by: Shakeel Butt <shakeelb@xxxxxxxxxx>
> Acked-by: Roman Gushchin <guro@xxxxxx>
> Cc: Vlastimil Babka <vbabka@xxxxxxx>
> Cc: Christoph Lameter <cl@xxxxxxxxx>
> Cc: Pekka Enberg <penberg@xxxxxxxxxx>
> Cc: David Rientjes <rientjes@xxxxxxxxxx>
> Cc: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
> Cc: Shakeel Butt <shakeelb@xxxxxxxxxx>
> Cc: <stable@xxxxxxxxxxxxxxx>
> Link: http://lkml.kernel.org/r/20200716165103.83462-1-songmuchun@xxxxxxxxxxxxx
> Signed-off-by: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
> Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
>
> ---
>  mm/slab_common.c |   35 ++++++++++++++++++++++++++++-------
>  1 file changed, 28 insertions(+), 7 deletions(-)
>
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -310,6 +310,14 @@ int slab_unmergeable(struct kmem_cache *
>  	if (s->refcount < 0)
>  		return 1;
>
> +#ifdef CONFIG_MEMCG_KMEM
> +	/*
> +	 * Skip the dying kmem_cache.
> +	 */
> +	if (s->memcg_params.dying)
> +		return 1;
> +#endif
> +
>  	return 0;
>  }
>
> @@ -832,12 +840,15 @@ static int shutdown_memcg_caches(struct
>  	return 0;
>  }
>
> -static void flush_memcg_workqueue(struct kmem_cache *s)
> +static void memcg_set_kmem_cache_dying(struct kmem_cache *s)
>  {
>  	mutex_lock(&slab_mutex);
>  	s->memcg_params.dying = true;
>  	mutex_unlock(&slab_mutex);

We should remove mutex_lock/unlock(&slab_mutex) here, because we
already hold the slab_mutex from kmem_cache_destroy().
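Otherwise kmem_cache_destroy() will deadlock: it now calls this helper
with slab_mutex already held, and the kernel mutex is not recursive.
Roughly what I have in mind is the following (an untested sketch against
this stable tree, relying on the caller's slab_mutex to protect the
dying flag, just as the old flush_memcg_workqueue() path did):

static void memcg_set_kmem_cache_dying(struct kmem_cache *s)
{
	/*
	 * kmem_cache_destroy() already holds slab_mutex when it calls
	 * this helper, so the flag write is serialized by the caller;
	 * taking slab_mutex again here would self-deadlock.
	 */
	s->memcg_params.dying = true;
}

The rest of the backport (setting the flag before dropping slab_mutex
and flushing the workqueue) can stay as it is below.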
> +}
>
> +static void flush_memcg_workqueue(struct kmem_cache *s)
> +{
>  	/*
>  	 * SLUB deactivates the kmem_caches through call_rcu_sched. Make
>  	 * sure all registered rcu callbacks have been invoked.
> @@ -858,10 +869,6 @@ static inline int shutdown_memcg_caches(
>  {
>  	return 0;
>  }
> -
> -static inline void flush_memcg_workqueue(struct kmem_cache *s)
> -{
> -}
>  #endif /* CONFIG_MEMCG_KMEM */
>
>  void slab_kmem_cache_release(struct kmem_cache *s)
> @@ -879,8 +886,6 @@ void kmem_cache_destroy(struct kmem_cach
>  	if (unlikely(!s))
>  		return;
>
> -	flush_memcg_workqueue(s);
> -
>  	get_online_cpus();
>  	get_online_mems();
>
> @@ -890,6 +895,22 @@ void kmem_cache_destroy(struct kmem_cach
>  	if (s->refcount)
>  		goto out_unlock;
>
> +#ifdef CONFIG_MEMCG_KMEM
> +	memcg_set_kmem_cache_dying(s);
> +
> +	mutex_unlock(&slab_mutex);
> +
> +	put_online_mems();
> +	put_online_cpus();
> +
> +	flush_memcg_workqueue(s);
> +
> +	get_online_cpus();
> +	get_online_mems();
> +
> +	mutex_lock(&slab_mutex);
> +#endif
> +
>  	err = shutdown_memcg_caches(s);
>  	if (!err)
>  		err = shutdown_cache(s);
>

--
Yours,
Muchun