Re: [patch 4/5] mm, oom: reduce dependency on tasklist_lock

On Tue, 3 Jul 2012, Oleg Nesterov wrote:

> >  		      unsigned int points, unsigned long totalpages,
> >  		      struct mem_cgroup *memcg, nodemask_t *nodemask,
> > @@ -454,6 +462,7 @@ void oom_kill_process(struct task_struct *p, gfp_t gfp_mask, int order,
> >  	 */
> >  	if (p->flags & PF_EXITING) {
> >  		set_tsk_thread_flag(p, TIF_MEMDIE);
> > +		put_task_struct(p);
> >  		return;
> >  	}
> >  
> > @@ -471,6 +480,7 @@ void oom_kill_process(struct task_struct *p, gfp_t gfp_mask, int order,
> >  	 * parent.  This attempts to lose the minimal amount of work done while
> >  	 * still freeing memory.
> >  	 */
> > +	read_lock(&tasklist_lock);
> >  	do {
> >  		list_for_each_entry(child, &t->children, sibling) {
> >  			unsigned int child_points;
> > @@ -483,15 +493,26 @@ void oom_kill_process(struct task_struct *p, gfp_t gfp_mask, int order,
> >  			child_points = oom_badness(child, memcg, nodemask,
> >  								totalpages);
> >  			if (child_points > victim_points) {
> > +				put_task_struct(victim);
> >  				victim = child;
> >  				victim_points = child_points;
> > +				get_task_struct(victim);
> >  			}
> >  		}
> >  	} while_each_thread(p, t);
> > +	read_unlock(&tasklist_lock);
> >  
> > -	victim = find_lock_task_mm(victim);
> > -	if (!victim)
> > +	rcu_read_lock();
> > +	p = find_lock_task_mm(victim);
> > +	if (!p) {
> > +		rcu_read_unlock();
> > +		put_task_struct(victim);
> >  		return;
> > +	} else if (victim != p) {
> > +		get_task_struct(p);
> > +		put_task_struct(victim);
> > +		victim = p;
> > +	}
> >  
> >  	/* mm cannot safely be dereferenced after task_unlock(victim) */
> >  	mm = victim->mm;
> > @@ -522,9 +543,11 @@ void oom_kill_process(struct task_struct *p, gfp_t gfp_mask, int order,
> >  			task_unlock(p);
> >  			do_send_sig_info(SIGKILL, SEND_SIG_FORCED, p, true);
> >  		}
> > +	rcu_read_unlock();
> >  
> >  	set_tsk_thread_flag(victim, TIF_MEMDIE);
> >  	do_send_sig_info(SIGKILL, SEND_SIG_FORCED, victim, true);
> > +	put_task_struct(victim);
> 
> It seems to me we can avoid this get/put dance in oom_kill_process(),
> just you need to extend the rcu-protected area. In this case the caller
> of select_bad_process() does a single put_, and
> sysctl_oom_kill_allocating_task doesn't need get_task_struct(current).
> Look more clean/simple to me.
> 
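If I'm reading the suggestion right, the caller (out_of_memory()) would 
end up with something like the sketch below.  This is only a sketch, not 
the actual patch: select_bad_process() is assumed to return the victim 
with a reference already held, and ERR_PTR(-1UL) is assumed to mean 
another task is already exiting, so no kill is needed.

	rcu_read_lock();
	p = select_bad_process(&points, totalpages, NULL, nodemask,
			       force_kill);
	if (p && PTR_ERR(p) != -1UL) {
		oom_kill_process(p, gfp_mask, order, points, totalpages,
				 NULL, nodemask, "Out of memory");
		put_task_struct(p);	/* the single put, by the caller */
	}
	rcu_read_unlock();
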

We could grab rcu_read_lock() before the first tasklist scan and hold it 
until a process is killed, yes, but there's a higher likelihood that it 
would never be dropped during concurrent oom kills, in the same way that 
the write side of tasklist_lock is currently starved.  On a system with a 
large number of cpus this isn't even a rare situation to run into: there 
would never be a moment when the read lock is dropped on all cpus.  I've 
attempted to make the locking as fine-grained as possible, holding it only 
when absolutely required and using task references to keep the selected 
threads around until they are killed.
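
Schematically, the pattern is the following (pick_victim() is a made-up 
helper purely for illustration, and checks such as the shared-mm test are 
elided; in the patch this logic sits inline in oom_kill_process()):

/*
 * Pin the current best victim with a reference while scanning under
 * tasklist_lock so the lock can be dropped before the kill; the
 * reference keeps the task_struct from being freed in between.
 */
static struct task_struct *pick_victim(struct task_struct *p,
				       struct mem_cgroup *memcg,
				       nodemask_t *nodemask,
				       unsigned long totalpages)
{
	struct task_struct *t = p, *child, *victim = p;
	unsigned int victim_points = 0, child_points;

	get_task_struct(victim);
	read_lock(&tasklist_lock);
	do {
		list_for_each_entry(child, &t->children, sibling) {
			child_points = oom_badness(child, memcg, nodemask,
						   totalpages);
			if (child_points > victim_points) {
				/* move the pinned reference to the new best */
				put_task_struct(victim);
				victim = child;
				victim_points = child_points;
				get_task_struct(victim);
			}
		}
	} while_each_thread(p, t);
	read_unlock(&tasklist_lock);

	/* caller does the final put_task_struct() after the kill */
	return victim;
}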

Let me know if you have a better solution to rcu read lock starvation.