(2012/04/10 2:40), Tejun Heo wrote:
> (cc'ing other memcg ppl just in case)
>
> Hello,
>
> I don't think the error handling is correct here.
>
> On Fri, Apr 06, 2012 at 08:04:10PM +0400, Glauber Costa wrote:
>> The last man standing justifying the need for populate() is the
>> sock memcg initialization functions. Now that we are able to pass
>> a struct mem_cgroup instead of a struct cgroup to the socket
>> initialization, there is nothing that stops us from initializing
>> everything in create().
>>
>> Signed-off-by: Glauber Costa <glommer@xxxxxxxxxxxxx>
>> CC: Tejun Heo <tj@xxxxxxxxxx>
>> CC: Li Zefan <lizefan@xxxxxxxxxx>
>> ---
> ...
>> @@ -5010,7 +5010,9 @@ mem_cgroup_create(struct cgroup *cont)
>>  	memcg->move_charge_at_immigrate = 0;
>>  	mutex_init(&memcg->thresholds_lock);
>>  	spin_lock_init(&memcg->move_lock);
>> -	return &memcg->css;
>> +
>> +	if (!memcg_init_kmem(memcg, &mem_cgroup_subsys))
>> +		return &memcg->css;
>>  free_out:
>>  	__mem_cgroup_free(memcg);
>>  	return ERR_PTR(error);
>
> So, the control just falls through to free_out: on kmem init failure;
> however, there seems to be stuff which needs to be undone - the
> hotcpu_notifier() registration, which BTW seems incorrect even on its
> own - unmounting and mounting again would probably register the same
> notifier multiple times, corrupting the notification chain - and the
> ref inc on the parent.
>

OK, it should be fixed.

> It probably would be best to reorganize the function slightly such
> that it's organized as...
>
> 1. alloc
> 2. init stuff w/o other side effects
> 3. make side effects
>
> and add kmemcg init at the end of the second step.
>
> Also, memcg maintainers, once the patches get updated and acked, I'd
> like to route them through the cgroup tree so that I can kill
> ->populate there. cgroup/for-3.5 is a stable branch which can be
> pulled into other trees, including the memcg one. Would that be okay?
>

Hm, I'm okay with that, but..... Michal?

Thanks,
-Kame
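
For reference, a very rough sketch of the alloc / side-effect-free init /
side-effects split Tejun describes, with the kmem init at the end of step 2
so a failure there only has to free the mem_cgroup. This is not a patch
against any tree: memcg_init_kmem() is the helper from Glauber's series,
the other identifiers are paraphrased from 3.4-era mm/memcontrol.c, and
nearly all of the real initialization is elided.

/*
 * Rough sketch only: step 1 allocates, step 2 does initialization with
 * no side effects and ends with the kmem init, step 3 makes the side
 * effects that would otherwise need explicit undo on failure.
 */
static struct cgroup_subsys_state * __ref
mem_cgroup_create(struct cgroup *cont)
{
	struct mem_cgroup *memcg, *parent = NULL;
	long error = -ENOMEM;

	/* 1. alloc */
	memcg = mem_cgroup_alloc();
	if (!memcg)
		return ERR_PTR(error);

	/* 2. init stuff w/o other side effects */
	memcg->move_charge_at_immigrate = 0;
	mutex_init(&memcg->thresholds_lock);
	spin_lock_init(&memcg->move_lock);
	/* ... the rest of the purely local initialization ... */

	/* kmem init goes here, before any side effects exist */
	if (memcg_init_kmem(memcg, &mem_cgroup_subsys))
		goto free_out;

	/* 3. make side effects (everything that would need undoing) */
	if (cont->parent == NULL) {
		/*
		 * The hotplug notifier should be registered exactly once
		 * (e.g. from an __init hook), not on every root creation,
		 * or a remount registers it again and corrupts the chain.
		 */
		root_mem_cgroup = memcg;
	} else {
		parent = mem_cgroup_from_cont(cont->parent);
		if (parent->use_hierarchy)
			mem_cgroup_get(parent);
	}

	return &memcg->css;

free_out:
	/* nothing from step 3 has happened yet, so a plain free suffices */
	__mem_cgroup_free(memcg);
	return ERR_PTR(error);
}

With this ordering the kmem failure path needs nothing beyond
__mem_cgroup_free(), and the notifier double-registration issue is fixed
separately instead of being entangled with the create() error path.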