Re: [PATCH 1/3] cgroup: remove tasklist_lock from cgroup_attach_proc

On Fri, Dec 23, 2011 at 03:27:45AM +0100, Frederic Weisbecker wrote:
> On Thu, Dec 22, 2011 at 04:57:51PM -0800, Mandeep Singh Baines wrote:
> > Since cgroup_attach_proc is protected by a threadgroup_lock, we
> > no longer need a tasklist_lock to protect while_each_thread.
> > To keep the complexity of the double-check locking in one place,
> > I also moved the thread_group_leader check up into
> > attach_task_by_pid.
> > 
> > While at it, also converted a couple of returns to gotos.
> > 
> > The suggestion was made here:
> > 
> > https://lkml.org/lkml/2011/12/22/86
> > 
> > Suggested-by: Frederic Weisbecker <fweisbec@xxxxxxxxx>
> > Signed-off-by: Mandeep Singh Baines <msb@xxxxxxxxxxxx>
> > Cc: Tejun Heo <tj@xxxxxxxxxx>
> > Cc: Li Zefan <lizf@xxxxxxxxxxxxxx>
> > Cc: containers@xxxxxxxxxxxxxxxxxxxxxxxxxx
> > Cc: cgroups@xxxxxxxxxxxxxxx
> > Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx>
> > Cc: Oleg Nesterov <oleg@xxxxxxxxxx>
> > Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> > Cc: Paul Menage <paul@xxxxxxxxxxxxxx>
> > ---
> >  kernel/cgroup.c |   52 +++++++++++++++++++++-------------------------------
> >  1 files changed, 21 insertions(+), 31 deletions(-)
> > 
> > diff --git a/kernel/cgroup.c b/kernel/cgroup.c
> > index 1042b3c..032139d 100644
> > --- a/kernel/cgroup.c
> > +++ b/kernel/cgroup.c
> > @@ -2102,21 +2102,6 @@ int cgroup_attach_proc(struct cgroup *cgrp, struct task_struct *leader)
> >  	if (retval)
> >  		goto out_free_group_list;
> >  
> > -	/* prevent changes to the threadgroup list while we take a snapshot. */
> > -	read_lock(&tasklist_lock);
> > -	if (!thread_group_leader(leader)) {
> > -		/*
> > -		 * a race with de_thread from another thread's exec() may strip
> > -		 * us of our leadership, making while_each_thread unsafe to use
> > -		 * on this task. if this happens, there is no choice but to
> > -		 * throw this task away and try again (from cgroup_procs_write);
> > -		 * this is "double-double-toil-and-trouble-check locking".
> > -		 */
> > -		read_unlock(&tasklist_lock);
> > -		retval = -EAGAIN;
> > -		goto out_free_group_list;
> > -	}
> > -
> >  	tsk = leader;
> >  	i = 0;
> >  	do {
> > @@ -2145,7 +2130,6 @@ int cgroup_attach_proc(struct cgroup *cgrp, struct task_struct *leader)
> >  	group_size = i;
> >  	tset.tc_array = group;
> >  	tset.tc_array_len = group_size;
> > -	read_unlock(&tasklist_lock);
> 
> You still need rcu_read_lock()/rcu_read_unlock() around
> 	do {
> 
> 	} while_each_thread()
> 
> because threadgroup_lock() doesn't lock the part that removes a thread from
> its group on exit.

Actually while_each_thread() takes care of safely walking the thread group
list. But we still need RCU to ensure the task is not released in parallel:
threadgroup_lock() doesn't synchronize against that once the task has
already passed the setting of PF_EXITING.
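
Something like the below is what I have in mind (untested sketch, only to
show where the RCU read side would go around the snapshot loop; the loop
body is elided and stands for whatever the patch already does there):

	rcu_read_lock();
	tsk = leader;
	i = 0;
	do {
		/*
		 * RCU is what keeps the task_struct from being freed while
		 * we snapshot it: threadgroup_lock() only covers tasks that
		 * haven't started exiting yet, so a task past PF_EXITING
		 * can still go away under us without this.
		 */

		/* ... snapshot tsk into the flex array as before ... */
	} while_each_thread(leader, tsk);
	rcu_read_unlock();
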
_______________________________________________
Containers mailing list
Containers@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linuxfoundation.org/mailman/listinfo/containers

