On 2018/12/28 19:22, Kirill Tkhai wrote:
>> @@ -1389,8 +1389,13 @@ static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
>>  	};
>>  	bool ret;
>>  
>> -	mutex_lock(&oom_lock);
>> -	ret = out_of_memory(&oc);
>> +	if (mutex_lock_killable(&oom_lock))
>> +		return true;
>> +	/*
>> +	 * A few threads which were not waiting at mutex_lock_killable() can
>> +	 * fail to bail out. Therefore, check again after holding oom_lock.
>> +	 */
>> +	ret = fatal_signal_pending(current) || out_of_memory(&oc);
> 
> This fatal_signal_pending() check makes sense because it is possible
> that a killed task wakes up slowly and returns from schedule() when
> there are no more waiters for the lock.

Thanks. Michal thinks that mutex_lock_killable() would be sufficient
( https://lkml.kernel.org/r/20181107100810.GA27423@xxxxxxxxxxxxxx ), but
I can confirm that mutex_lock_killable() alone is not sufficient when I
test using a VM with 8 CPUs. Thus, I'd like to keep this
fatal_signal_pending() check.

> 
> Why not make this approach generic, and add a check into
> __mutex_lock_common() after schedule_preempt_disabled()
> instead of this? This will handle all the places like
> that at once.
> 
> (Adding only a check is not enough for __mutex_lock_common(),
> since the mutex code will also need to wake the next waiter. So
> you will need a couple of changes in the mutex code.)

I think that we should not assume that everybody is ready for making
mutex_lock_killable() return -EINTR if fatal_signal_pending() is true,
and that adding the version below would be a safer choice.

int __sched mutex_lock_unless_killed(struct mutex *lock)
{
	const int ret = mutex_lock_killable(lock);

	if (ret)
		return ret;
	if (fatal_signal_pending(current)) {
		mutex_unlock(lock);
		return -EINTR;
	}
	return 0;
}
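
For completeness, here is a sketch (untested, not part of the patch) of
how mem_cgroup_out_of_memory() could then use this helper, assuming
mutex_lock_unless_killed() were added to the mutex code and declared in
include/linux/mutex.h:

static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
				     int order)
{
	struct oom_control oc = {
		/* ... initializer unchanged ... */
	};
	bool ret;

	/*
	 * The helper returns -EINTR both when we were killed while
	 * sleeping on oom_lock and when a fatal signal arrived after
	 * wakeup, so no separate fatal_signal_pending() check is
	 * needed at this call site.
	 */
	if (mutex_lock_unless_killed(&oom_lock))
		return true;	/* Pretend the OOM kill succeeded. */
	ret = out_of_memory(&oc);
	mutex_unlock(&oom_lock);
	return ret;
}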