This means that when we destroy a memcg cache that happened to be empty,
it may take a long time to go away: removing the memcg reference won't
destroy it, because there are pending references, and the empty pages
will stay there until a shrinker is called upon for some other reason.

In this patch, we call kmem_cache_shrink() for all dead caches that
cannot be destroyed because of remaining pages. After shrinking, it is
possible that the cache can be freed. If this is not the case, we'll
schedule a lazy worker to keep trying.

[ v2: also call verify_dead for the slab ]
[ v3: use delayed_work to avoid calling verify_dead at every free ]
[ v6: do not spawn worker if work is already pending ]

Signed-off-by: Glauber Costa <glommer@xxxxxxxxxxxxx>
CC: Christoph Lameter <cl@xxxxxxxxx>
CC: Pekka Enberg <penberg@xxxxxxxxxxxxxx>
CC: Michal Hocko <mhocko@xxxxxxx>
CC: Kamezawa Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx>
CC: Johannes Weiner <hannes@xxxxxxxxxxx>
CC: Suleiman Souhlal <suleiman@xxxxxxxxxx>
CC: Tejun Heo <tj@xxxxxxxxxx>
---
 include/linux/slab.h |  2 +-
 mm/memcontrol.c      | 55 ++++++++++++++++++++++++++++++++++++++++++++++------
 2 files changed, 50 insertions(+), 7 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index ef2314e..0df42db 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -214,7 +214,7 @@ struct memcg_cache_params {
 			struct kmem_cache *root_cache;
 			bool dead;
 			atomic_t nr_pages;
-			struct work_struct destroy;
+			struct delayed_work destroy;
 		};
 	};
 };
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 31da8bc..6e2575a 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3048,12 +3048,35 @@
 static void kmem_cache_destroy_work_func(struct work_struct *w)
 {
 	struct kmem_cache *cachep;
 	struct memcg_cache_params *p;
+	struct delayed_work *dw = to_delayed_work(w);
 
-	p = container_of(w, struct memcg_cache_params, destroy);
+	p = container_of(dw, struct memcg_cache_params, destroy);
 	cachep = memcg_params_to_cache(p);
 
-	if (!atomic_read(&cachep->memcg_params->nr_pages))
+	/*
+	 * If we get down to 0 after shrink, we could delete right away.
+	 * However, memcg_release_pages() already puts us back in the workqueue
+	 * in that case. If we proceed deleting, we'll get a dangling
+	 * reference, and removing the object from the workqueue in that case
+	 * is unnecessary complication. We are not a fast path.
+	 *
+	 * Note that this case is fundamentally different from racing with
+	 * shrink_slab(): if mem_cgroup_destroy_cache() is called in
+	 * kmem_cache_shrink, not only would we be reinserting a dead cache
+	 * into the queue, but doing so from inside the worker racing to
+	 * destroy it.
+	 *
+	 * So if we aren't down to zero, we'll just schedule a worker and try
+	 * again.
+	 */
+	if (atomic_read(&cachep->memcg_params->nr_pages) != 0) {
+		kmem_cache_shrink(cachep);
+		if (atomic_read(&cachep->memcg_params->nr_pages) == 0)
+			return;
+		/* Once per minute should be good enough. */
+		schedule_delayed_work(&cachep->memcg_params->destroy, 60 * HZ);
+	} else
 		kmem_cache_destroy(cachep);
 }
 
@@ -3063,10 +3086,30 @@ void mem_cgroup_destroy_cache(struct kmem_cache *cachep)
 		return;
 
 	/*
+	 * There are many ways in which we can get here.
+	 *
+	 * We can get to a memory-pressure situation while the delayed work is
+	 * still pending to run. The vmscan shrinkers can then release all
+	 * cache memory and get us to destruction. If this is the case, we'll
+	 * be executed twice, which is a bug (the second time will execute over
+	 * bogus data). In this case, cancelling the work should be fine.
+	 *
+	 * But we can also get here from the worker itself, if
+	 * kmem_cache_shrink() is enough to shake all the remaining objects and
+	 * get the page count to 0. In this case, we'll deadlock if we try to
+	 * cancel the work (the worker runs with an internal lock held, which
+	 * is the same lock we would hold for cancel_delayed_work_sync()).
+	 *
+	 * Since we can't possibly know who got us here, just refrain from
+	 * running if there is already work pending.
+	 */
+	if (delayed_work_pending(&cachep->memcg_params->destroy))
+		return;
+	/*
 	 * We have to defer the actual destroying to a workqueue, because
 	 * we might currently be in a context that cannot sleep.
 	 */
-	schedule_work(&cachep->memcg_params->destroy);
+	schedule_delayed_work(&cachep->memcg_params->destroy, 0);
 }
 
 static char *memcg_cache_name(struct mem_cgroup *memcg, struct kmem_cache *s)
@@ -3218,9 +3261,9 @@ static void mem_cgroup_destroy_all_caches(struct mem_cgroup *memcg)
 	list_for_each_entry(params, &memcg->memcg_slab_caches, list) {
 		cachep = memcg_params_to_cache(params);
 		cachep->memcg_params->dead = true;
-		INIT_WORK(&cachep->memcg_params->destroy,
-			  kmem_cache_destroy_work_func);
-		schedule_work(&cachep->memcg_params->destroy);
+		INIT_DELAYED_WORK(&cachep->memcg_params->destroy,
+				  kmem_cache_destroy_work_func);
+		schedule_delayed_work(&cachep->memcg_params->destroy, 0);
 	}
 	mutex_unlock(&memcg->slab_caches_mutex);
 }
-- 
1.7.11.7
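
For reference, the rearm-until-empty pattern that kmem_cache_destroy_work_func()
implements above can be boiled down to the following self-contained sketch. The
names dead_cache, try_to_shrink, really_destroy and reap_work_func are
hypothetical placeholders invented for illustration; only to_delayed_work(),
container_of(), schedule_delayed_work() and the delayed_work type are the real
kernel interfaces the patch relies on.

#include <linux/kernel.h>
#include <linux/atomic.h>
#include <linux/jiffies.h>
#include <linux/workqueue.h>

/* Hypothetical stand-in for a dead cache that still owns pages. */
struct dead_cache {
	atomic_t nr_pages;
	struct delayed_work destroy;
};

/* Placeholder stubs; in the patch these are kmem_cache_shrink() and
 * kmem_cache_destroy(). */
static void try_to_shrink(struct dead_cache *dc) { }
static void really_destroy(struct dead_cache *dc) { }

static void reap_work_func(struct work_struct *w)
{
	/* The handler receives a work_struct; recover the delayed_work
	 * it is embedded in, then the containing object. */
	struct delayed_work *dw = to_delayed_work(w);
	struct dead_cache *dc = container_of(dw, struct dead_cache, destroy);

	if (atomic_read(&dc->nr_pages) != 0) {
		try_to_shrink(dc);
		if (atomic_read(&dc->nr_pages) == 0)
			return;	/* in the patch, the page-release path
				 * requeues the work for this case */
		/* Still busy: rearm and retry once per minute. */
		schedule_delayed_work(&dc->destroy, 60 * HZ);
	} else {
		really_destroy(dc);
	}
}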
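
The companion scheduling side, continuing the same hypothetical sketch, shows
why the patch tests delayed_work_pending() instead of cancelling: a
cancel_delayed_work_sync() issued from inside the worker would wait on the very
work item that is running, i.e. deadlock.

/* Queue a reap attempt; safe to call from the worker path itself. */
static void dead_cache_schedule_reap(struct dead_cache *dc)
{
	/*
	 * We may be called while the work is already queued. Cancelling
	 * here could deadlock against the running worker, so simply skip
	 * if work is already pending.
	 */
	if (delayed_work_pending(&dc->destroy))
		return;
	schedule_delayed_work(&dc->destroy, 0);
}

/* One-time setup when the cache is first marked dead. */
static void dead_cache_mark_dead(struct dead_cache *dc)
{
	INIT_DELAYED_WORK(&dc->destroy, reap_work_func);
	dead_cache_schedule_reap(dc);
}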