On Tue 23-10-18 10:01:08, Tetsuo Handa wrote:
> Michal Hocko wrote:
> > On Mon 22-10-18 20:45:17, Tetsuo Handa wrote:
> > > > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > > > index e79cb59552d9..a9dfed29967b 100644
> > > > --- a/mm/memcontrol.c
> > > > +++ b/mm/memcontrol.c
> > > > @@ -1380,10 +1380,22 @@ static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
> > > >  		.gfp_mask = gfp_mask,
> > > >  		.order = order,
> > > >  	};
> > > > -	bool ret;
> > > > +	bool ret = true;
> > > >
> > > >  	mutex_lock(&oom_lock);
> > > > +
> > > > +	/*
> > > > +	 * multi-threaded tasks might race with oom_reaper and gain
> > > > +	 * MMF_OOM_SKIP before reaching out_of_memory which can lead
> > > > +	 * to out_of_memory failure if the task is the last one in
> > > > +	 * memcg which would be a false positive failure reported
> > > > +	 */
> > > > +	if (tsk_is_oom_victim(current))
> > > > +		goto unlock;
> > > > +
> > >
> > > This is not wrong but is strange. We can use mutex_lock_killable(&oom_lock)
> > > so that any killed threads no longer wait for oom_lock.
> >
> > tsk_is_oom_victim is stronger because it doesn't depend on
> > fatal_signal_pending which might be cleared throughout the exit process.
> >
>
> I still want to propose this. No need to be memcg OOM specific.

Well, I stand by what I said [1] about simplicity and a specific fix for a
specific issue, especially in tricky code like this, where all the
consequences are far more subtle than they seem.

This is obviously a matter of taste, but I don't see much point in discussing
it back and forth forever. Unless there is general agreement that the above
is less appropriate, I am willing to consider a different change, but I
simply do not have the energy to nitpick forever.

[1] http://lkml.kernel.org/r/20181022134315.GF18839@xxxxxxxxxxxxxx
--
Michal Hocko
SUSE Labs
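
For reference, a rough sketch of the mutex_lock_killable() alternative that
Tetsuo alludes to above. This is only an illustration of the idea, not a
patch posted in this thread; the surrounding mem_cgroup_out_of_memory()
context is assumed from the quoted diff:

static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
				     int order)
{
	struct oom_control oc = {
		.memcg = memcg,
		.gfp_mask = gfp_mask,
		.order = order,
	};
	bool ret;

	/*
	 * Give up the OOM invocation if the current task has been killed:
	 * a dying task should not block on oom_lock only to go and kill
	 * somebody else.  Note the caveat raised above: this relies on
	 * fatal_signal_pending(), which can be cleared while the task is
	 * exiting, whereas tsk_is_oom_victim() stays set.
	 */
	if (mutex_lock_killable(&oom_lock))
		return true;
	ret = out_of_memory(&oc);
	mutex_unlock(&oom_lock);
	return ret;
}

Returning true when the lock attempt is interrupted mirrors the goto unlock
path in the quoted patch, so the charge path treats the situation as if the
OOM killer had made progress.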