On Fri 05-04-13 14:19:30, Glauber Costa wrote:
> 
> > 	 * __mem_cgroup_free will issue static_key_slow_dec because this
> > 	 * memcg is active already. If the later initialization fails
> > 	 * then the cgroup core triggers the cleanup so we do not have
> > 	 * to do it here.
> > 	 */
> >> -	mem_cgroup_get(memcg);
> >>  	static_key_slow_inc(&memcg_kmem_enabled_key);
> >>  
> >>  	mutex_lock(&set_limit_mutex);
> >> @@ -5823,23 +5814,33 @@ static int memcg_init_kmem(struct mem_cgroup *memcg, struct cgroup_subsys *ss)
> >>  	return mem_cgroup_sockets_init(memcg, ss);
> >>  };
> >>  
> >> -static void kmem_cgroup_destroy(struct mem_cgroup *memcg)
> >> +static void kmem_cgroup_css_offline(struct mem_cgroup *memcg)
> >>  {
> >> -	mem_cgroup_sockets_destroy(memcg);
> >> +	/*
> >> +	 * kmem charges can outlive the cgroup. In the case of slab
> >> +	 * pages, for instance, a page contain objects from various
> >> +	 * processes, so it is unfeasible to migrate them away. We
> >> +	 * need to reference count the memcg because of that.
> >> +	 */
> > 
> > I would prefer if we could merge all three comments in this function
> > into a single one. What about something like the following?
> > 	/*
> > 	 * kmem charges can outlive the cgroup. In the case of slab
> > 	 * pages, for instance, a page contains objects from various
> > 	 * processes. As we do not take a reference for every such
> > 	 * allocation we have to be careful when doing uncharge
> > 	 * (see memcg_uncharge_kmem) and here during offlining.
> > 	 * The idea is that only the _last_ uncharge which sees
> > 	 * the dead memcg will drop the last reference. An additional
> > 	 * reference is taken here before the group is marked dead
> > 	 * which is then paired with css_put during uncharge resp. here.
> > 	 * Although this might sound strange as this path is called when
> > 	 * the reference has already dropped down to 0 and shouldn't be
> > 	 * incremented anymore (css_tryget would fail) we do not have
> > 	 * other options because of the kmem allocations lifetime.
> > 	 */
> >> +	css_get(&memcg->css);
> > 
> > I think that you need a write memory barrier here because neither
> > css_get nor memcg_kmem_mark_dead implies it. memcg_uncharge_kmem uses
> > memcg_kmem_test_and_clear_dead which implies a full memory barrier, but
> > it should see the elevated reference count. No?
> > 
> 
> We don't use barriers for any other kind of reference counting. What is
> different here?

Now we need to make sure that the racing uncharge sees the elevated
reference count before the group is marked dead. Otherwise we could see
a dead group with ref count == 0, no?

-- 
Michal Hocko
SUSE Labs
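
The ordering requirement can be modelled in a few lines of user-space C11.
This is only an illustrative sketch, not kernel code: the names refcnt, dead,
offline_path and uncharge_path are invented stand-ins for the css reference
count, the kmem "dead" flag, kmem_cgroup_css_offline and memcg_uncharge_kmem.
The C11 release store plays the role of the write barrier being asked for
above, and the acquire exchange plays the role of the full barrier implied by
memcg_kmem_test_and_clear_dead.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_int refcnt = 1;		/* stand-in for the css reference count */
static atomic_bool dead = false;	/* stand-in for the kmem "dead" flag */

/* offline side: bump the reference, then publish the dead flag */
static void offline_path(void)
{
	atomic_fetch_add_explicit(&refcnt, 1, memory_order_relaxed);	/* "css_get" */
	/*
	 * Release ordering plays the role of the write barrier under
	 * discussion: any thread that observes dead == true is also
	 * guaranteed to observe the incremented refcnt.
	 */
	atomic_store_explicit(&dead, true, memory_order_release);	/* "mark dead" */
}

/* uncharge side: only the caller that clears the dead flag drops the extra ref */
static void uncharge_path(void)
{
	/* acquire pairs with the release store above ("test and clear dead") */
	if (atomic_exchange_explicit(&dead, false, memory_order_acquire)) {
		int old = atomic_fetch_sub_explicit(&refcnt, 1,
						    memory_order_relaxed);	/* "css_put" */
		/*
		 * In the concurrent case the release/acquire pairing makes the
		 * offline side's increment visible here, so this uncharge
		 * cannot drop the count to 0 out from under it.
		 */
		printf("dropped ref, previous count %d\n", old);
	}
}

int main(void)
{
	offline_path();
	uncharge_path();
	return 0;
}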