On Fri, Dec 23, 2011 at 09:28:44AM -0800, Mandeep Singh Baines wrote:
> Since cgroup_attach_proc is protected by a threadgroup_lock, we
> can replace the tasklist_lock in cgroup_attach_proc with an
> rcu_read_lock. To keep the complexity of the double-check locking
> in one place, I also moved the thread_group_leader check up into
> attach_task_by_pid. This allows us to use a goto instead of
> returning -EAGAIN.
>
> While at it, also converted a couple of returns to gotos.
>
> Changes in V3:
> * https://lkml.org/lkml/2011/12/22/419 (Frederic Weisbecker)
> * Add an rcu_read_lock to protect against exit
> Changes in V2:
> * https://lkml.org/lkml/2011/12/22/86 (Tejun Heo)
> * Use a goto instead of returning -EAGAIN
>
> Suggested-by: Frederic Weisbecker <fweisbec@xxxxxxxxx>
> Signed-off-by: Mandeep Singh Baines <msb@xxxxxxxxxxxx>
> Cc: Tejun Heo <tj@xxxxxxxxxx>
> Cc: Li Zefan <lizf@xxxxxxxxxxxxxx>
> Cc: containers@xxxxxxxxxxxxxxxxxxxxxxxxxx
> Cc: cgroups@xxxxxxxxxxxxxxx
> Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx>
> Cc: Oleg Nesterov <oleg@xxxxxxxxxx>
> Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> Cc: Paul Menage <paul@xxxxxxxxxxxxxx>
> ---
>  kernel/cgroup.c |   74 +++++++++++++++++++-----------------------------------
>  1 files changed, 26 insertions(+), 48 deletions(-)
>
> diff --git a/kernel/cgroup.c b/kernel/cgroup.c
> index 1042b3c..6ee1438 100644
> --- a/kernel/cgroup.c
> +++ b/kernel/cgroup.c
> @@ -2102,21 +2102,7 @@ int cgroup_attach_proc(struct cgroup *cgrp, struct task_struct *leader)
>  	if (retval)
>  		goto out_free_group_list;
>
> -	/* prevent changes to the threadgroup list while we take a snapshot. */
> -	read_lock(&tasklist_lock);
> -	if (!thread_group_leader(leader)) {
> -		/*
> -		 * a race with de_thread from another thread's exec() may strip
> -		 * us of our leadership, making while_each_thread unsafe to use
> -		 * on this task. if this happens, there is no choice but to
> -		 * throw this task away and try again (from cgroup_procs_write);
> -		 * this is "double-double-toil-and-trouble-check locking".
> -		 */
> -		read_unlock(&tasklist_lock);
> -		retval = -EAGAIN;
> -		goto out_free_group_list;
> -	}
> -
> +	rcu_read_lock();

Please add a comment here to explain why we need this. It may not be
obvious to people who are not familiar with this code.

>  	tsk = leader;
>  	i = 0;

Also, you can move the rcu_read_lock() down to this point. The two
assignments above don't need to be protected.

>  	do {
> @@ -2145,7 +2131,7 @@ int cgroup_attach_proc(struct cgroup *cgrp, struct task_struct *leader)
>  	group_size = i;
>  	tset.tc_array = group;
>  	tset.tc_array_len = group_size;
> -	read_unlock(&tasklist_lock);
> +	rcu_read_unlock();

In a similar way, you can move the rcu_read_unlock() up to just after
while_each_thread().

Thanks.
_______________________________________________
Containers mailing list
Containers@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linuxfoundation.org/mailman/listinfo/containers
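For illustration, the narrowed critical section suggested in the review might look roughly like the sketch below. This is kernel-style pseudocode assembled from the quoted hunks, not the actual patch; the loop body is elided, and the comment wording is only a suggestion:

```c
	tsk = leader;
	i = 0;

	/*
	 * Hold the RCU read lock only around the thread-group walk:
	 * while_each_thread() is safe against a concurrent exit()/exec()
	 * as long as we stay inside an RCU read-side critical section.
	 * (The leader check was already done under threadgroup_lock in
	 * attach_task_by_pid.)
	 */
	rcu_read_lock();
	do {
		/* ... snapshot each thread into the flex array ... */
	} while_each_thread(leader, tsk);
	rcu_read_unlock();

	group_size = i;
	tset.tc_array = group;
	tset.tc_array_len = group_size;
```

This keeps the assignments before and after the loop outside the critical section, since they touch only local state.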