Re: [PATCH 2/2] memcg: do not report racy no-eligible OOM tasks

On Tue 08-01-19 19:39:58, Tetsuo Handa wrote:
> On 2019/01/08 17:14, Michal Hocko wrote:
> >>> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> >>> index af7f18b32389..90eb2e2093e7 100644
> >>> --- a/mm/memcontrol.c
> >>> +++ b/mm/memcontrol.c
> >>> @@ -1387,10 +1387,22 @@ static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
> >>>  		.gfp_mask = gfp_mask,
> >>>  		.order = order,
> >>>  	};
> >>> -	bool ret;
> >>> +	bool ret = true;
> >>>  
> >>>  	mutex_lock(&oom_lock);
> >>
> >> And because of "[PATCH 1/2] mm, oom: marks all killed tasks as oom
> >> victims", mark_oom_victim() will be called on current thread even if
> >> we used mutex_lock_killable(&oom_lock) here, like you said
> >>
> >>   mutex_lock_killable would take care of exiting task already. I would
> >>   then still prefer to check for mark_oom_victim because that is not racy
> >>   with the exit path clearing signals. I can update my patch to use
> >>   _killable lock variant if we are really going with the memcg specific
> >>   fix.
> >>
> >> . If the current thread is not yet killed by the OOM killer but can
> >> terminate without invoking the OOM killer, using
> >> mutex_lock_killable(&oom_lock) here saves some processes. What is the
> >> race you are referring to with "racy with the exit path clearing
> >> signals"?
> > 
> > This is unrelated to the patch.
> 
> Ultimately related! This is the reason why your patch should be preferred
> over my patch.

No! I've said I do not mind using mutex_lock_killable on top of this
patch. I just want to keep this fix minimal.
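For reference, a sketch of what the _killable variant under discussion
could look like (my illustration only, not a committed patch; the elided
parts follow the existing mem_cgroup_out_of_memory() body):

```c
static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
				     int order)
{
	struct oom_control oc = {
		.memcg = memcg,
		.gfp_mask = gfp_mask,
		.order = order,
	};
	bool ret = true;

	/*
	 * If current is killed while waiting for oom_lock, bail out and
	 * report progress (true) so the charge path retries; the dying
	 * task can then exit and release its memory instead of another
	 * victim being selected.
	 */
	if (mutex_lock_killable(&oom_lock))
		return true;
	/* ... multi-threaded check and out_of_memory(&oc) as before ... */
	ret = out_of_memory(&oc);
	mutex_unlock(&oom_lock);
	return ret;
}
```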

-- 
Michal Hocko
SUSE Labs



