This is not going to work. Will fix up and resend. Need to go to the top as suggested by Tejun.

Mandeep Singh Baines (msb@xxxxxxxxxxxx) wrote:
> By reading group_leader after taking the threadgroup_lock, we can
> avoid the double-check locking. This removes the return of
> -EAGAIN, so we can clean up cgroup_procs_write at the same time.
>
> The suggestion was made here:
>
> https://lkml.org/lkml/2011/12/22/371
>
> Suggested-by: Tejun Heo <tj@xxxxxxxxxx>
> Signed-off-by: Mandeep Singh Baines <msb@xxxxxxxxxxxx>
> Cc: Li Zefan <lizf@xxxxxxxxxxxxxx>
> Cc: containers@xxxxxxxxxxxxxxxxxxxxxxxxxx
> Cc: cgroups@xxxxxxxxxxxxxxx
> Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx>
> Cc: Frederic Weisbecker <fweisbec@xxxxxxxxx>
> Cc: Oleg Nesterov <oleg@xxxxxxxxxx>
> Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> Cc: Paul Menage <paul@xxxxxxxxxxxxxx>
> ---
>  kernel/cgroup.c |   40 ++++++----------------------------------
>  1 files changed, 6 insertions(+), 34 deletions(-)
>
> diff --git a/kernel/cgroup.c b/kernel/cgroup.c
> index 032139d..a5f7d1b 100644
> --- a/kernel/cgroup.c
> +++ b/kernel/cgroup.c
> @@ -2234,9 +2234,6 @@ static int attach_task_by_pid(struct cgroup *cgrp, u64 pid, bool threadgroup)
>  		ret = -ESRCH;
>  		goto out_unlock_cgroup;
>  	}
> -	/* we check later for a group_leader race with de_thread */
> -	if (threadgroup)
> -		tsk = tsk->group_leader;
>  	/*
>  	 * even if we're attaching all tasks in the thread group, we
>  	 * only need to check permissions on one of them.
> @@ -2252,33 +2249,17 @@ static int attach_task_by_pid(struct cgroup *cgrp, u64 pid, bool threadgroup)
>  		get_task_struct(tsk);
>  		rcu_read_unlock();
>  	} else {
> -		if (threadgroup)
> -			tsk = current->group_leader;
> -		else
> -			tsk = current;
> +		tsk = current;
>  		get_task_struct(tsk);
>  	}
>
>  	threadgroup_lock(tsk);
> -
> -	if (threadgroup) {
> -		if (!thread_group_leader(tsk)) {
> -			/*
> -			 * a race with de_thread from another thread's exec()
> -			 * may strip us of our leadership, if this happens,
> -			 * there is no choice but to throw this task away and
> -			 * try again (from cgroup_procs_write); this is
> -			 * "double-double-toil-and-trouble-check locking".
> -			 */
> -			ret = -EAGAIN;
> -			goto out_unlock_threadgroup;
> -		}
> -		ret = cgroup_attach_proc(cgrp, tsk);
> -	} else
> +	if (threadgroup)
> +		ret = cgroup_attach_proc(cgrp, tsk->group_leader);
> +	else
>  		ret = cgroup_attach_task(cgrp, tsk);
> -
> -out_unlock_threadgroup:
>  	threadgroup_unlock(tsk);
> +
>  	put_task_struct(tsk);
> out_unlock_cgroup:
>  	cgroup_unlock();
> @@ -2292,16 +2273,7 @@ static int cgroup_tasks_write(struct cgroup *cgrp, struct cftype *cft, u64 pid)
>
>  static int cgroup_procs_write(struct cgroup *cgrp, struct cftype *cft, u64 tgid)
>  {
> -	int ret;
> -	do {
> -		/*
> -		 * attach_proc fails with -EAGAIN if threadgroup leadership
> -		 * changes in the middle of the operation, in which case we need
> -		 * to find the task_struct for the new leader and start over.
> -		 */
> -		ret = attach_task_by_pid(cgrp, tgid, true);
> -	} while (ret == -EAGAIN);
> -	return ret;
> +	return attach_task_by_pid(cgrp, tgid, true);
>  }
>
>  /**
> --
> 1.7.3.1