On Tue, Aug 03, 2010 at 09:30:00PM -0700, Paul Menage wrote:
> >> --- a/kernel/cpuset.c
> >> +++ b/kernel/cpuset.c
> >> @@ -1404,6 +1404,10 @@ static int cpuset_can_attach(struct cgroup_subsys *ss, struct cgroup *cont,
> >>               struct task_struct *c;
> >>
> >>               rcu_read_lock();
> >> +             if (!thread_group_leader(tsk)) {
> >> +                     rcu_read_unlock();
> >> +                     return -EAGAIN;
> >> +             }
>
> Why are you adding this requirement, here and in sched.c? (ns_cgroup.c
> doesn't matter since it's being deleted).
>
> Paul

It was either this or:

	rcu_read_lock();
	for_each_subsys(...) {
		can_attach(...);
	}
	rcu_read_unlock();

which forces all can_attach callbacks not to sleep. By dropping
rcu_read_lock(), we allow the possibility of the exec race I described
in my last email, and therefore we have to re-check thread_group_leader()
each time we re-acquire rcu_read_lock() to iterate the thread group.

Yeah, it is not pretty. I call it
"double-double-toil-and-trouble-check locking". But it is safe.

-- Ben

_______________________________________________
Containers mailing list
Containers@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linux-foundation.org/mailman/listinfo/containers
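
[For readers following the thread: the pattern Ben describes can be sketched
roughly as below. This is kernel-style pseudocode for illustration only, not
the actual cpuset.c code; the function name `example_can_attach` and the
retry comment are invented, and the exact thread-group iteration in the real
patch may differ.]

	/* Sketch of the "double-check" pattern: because the RCU read
	 * section is dropped between subsystem callbacks (so each
	 * can_attach may sleep), an exec in another thread can change
	 * the group leader in the window.  The leader check therefore
	 * has to be repeated every time rcu_read_lock() is re-acquired
	 * to walk the thread group. */
	static int example_can_attach(struct task_struct *tsk)
	{
		struct task_struct *c;

		rcu_read_lock();
		if (!thread_group_leader(tsk)) {
			/* leader changed while the lock was dropped;
			 * caller retries the whole attach */
			rcu_read_unlock();
			return -EAGAIN;
		}
		list_for_each_entry_rcu(c, &tsk->thread_group, thread_group) {
			/* per-thread checks that must not sleep go here */
		}
		rcu_read_unlock();

		/* sleeping work happens here, outside the RCU read
		 * section; before the next thread-group walk the
		 * leader check above must be repeated. */
		return 0;
	}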