On 2018/11/07 19:08, Michal Hocko wrote:
> On Wed 07-11-18 18:45:27, Tetsuo Handa wrote:
>> On 2018/11/06 21:42, Michal Hocko wrote:
>>> On Tue 06-11-18 18:44:43, Tetsuo Handa wrote:
>>> [...]
>>>> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
>>>> index 6e1469b..a97648a 100644
>>>> --- a/mm/memcontrol.c
>>>> +++ b/mm/memcontrol.c
>>>> @@ -1382,8 +1382,13 @@ static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
>>>>  	};
>>>>  	bool ret;
>>>>  
>>>> -	mutex_lock(&oom_lock);
>>>> -	ret = out_of_memory(&oc);
>>>> +	if (mutex_lock_killable(&oom_lock))
>>>> +		return true;
>>>> +	/*
>>>> +	 * A few threads which were not waiting at mutex_lock_killable() can
>>>> +	 * fail to bail out. Therefore, check again after holding oom_lock.
>>>> +	 */
>>>> +	ret = fatal_signal_pending(current) || out_of_memory(&oc);
>>>>  	mutex_unlock(&oom_lock);
>>>>  	return ret;
>>>>  }
>>>
>>> If we are going with a memcg specific thingy then I really prefer the
>>> tsk_is_oom_victim approach. Or is there any reason why this is not
>>> suitable?
>>>
>>
>> Why do we need to wait for mark_oom_victim(), which is called only after
>> slow printk() messages?
>>
>> If the current thread got Ctrl-C and can therefore terminate, what is the
>> benefit of waiting for the OOM killer? What if there are several OOM events
>> in multiple memcg domains all waiting for completion of printk() messages?
>> I don't see the point in waiting for oom_lock, since try_charge() already
>> allows the current thread to terminate due to the fatal_signal_pending()
>> test.
>
> mutex_lock_killable would take care of an exiting task already. I would
> then still prefer to check for mark_oom_victim because that is not racy
> with the exit path clearing signals. I can update my patch to use the
> _killable lock variant if we are really going with the memcg specific
> fix.
>
> Johannes?
>

No response for one month. When can we get to the RCU stall problem that
syzbot reported?
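
For reference, a minimal sketch of the tsk_is_oom_victim()-based variant being
discussed might look like the following. This is illustrative only, not the
patch under review; the oom_control initialization is assumed to match the
existing mem_cgroup_out_of_memory(), and whether tsk_is_oom_victim(current) or
fatal_signal_pending(current) is the right check under oom_lock is exactly the
open question above.

/* Illustrative sketch, not the posted patch. */
static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
				     int order)
{
	struct oom_control oc = {
		.zonelist = NULL,
		.nodemask = NULL,
		.memcg = memcg,
		.gfp_mask = gfp_mask,
		.order = order,
	};
	bool ret;

	/* Bail out if a fatal signal arrives while waiting for oom_lock. */
	if (mutex_lock_killable(&oom_lock))
		return true;
	/*
	 * An already-selected OOM victim has access to memory reserves and
	 * will exit soon; do not invoke the OOM killer again on its behalf.
	 */
	ret = tsk_is_oom_victim(current) || out_of_memory(&oc);
	mutex_unlock(&oom_lock);
	return ret;
}

The only difference from the diff above is the condition checked under
oom_lock: tsk_is_oom_victim() is set by mark_oom_victim(), so it does not race
with the exit path clearing pending signals, which is the property Michal
points out.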