On Thu 22-12-16 22:33:40, Tetsuo Handa wrote:
> Tetsuo Handa wrote:
> > Now, what options are left other than replacing !mutex_trylock(&oom_lock)
> > with mutex_lock_killable(&oom_lock) which also stops wasting CPU time?
> > Are we waiting for offloading sending to consoles?
>
> From http://lkml.kernel.org/r/20161222115057.GH6048@xxxxxxxxxxxxxx :
> > > Although I don't know whether we agree with the
> > > mutex_lock_killable(&oom_lock) change, I think this patch alone can
> > > go as a cleanup.
> >
> > No, we don't agree on that part. As this is a printk issue I do not want
> > to work around it in the oom related code. That is just ridiculous. The
> > very same issue would be possible due to other continuous sources of log
> > messages.
>
> I don't think so. Lockup caused by printk() is printk's problem. But printk
> is not the only source of lockup. If CONFIG_PREEMPT=y, it is possible that
> a thread which held oom_lock can sleep for an unbounded period depending on
> scheduling priority.

Unless there is some runaway realtime process, the holder of the oom lock
shouldn't be preempted for an _unbounded_ amount of time. It might take
quite some time, though. But that is not limited to the OOM killer. Any
important part of the system (IO flushers and what not) would suffer from
the same issue.

> Then, you call such latency the scheduler's problem?
> The mutex_lock_killable(&oom_lock) change helps coping with whatever
> delays the OOM killer/reaper might encounter.

It helps _your_ particular insane workload. I believe you can construct
many others which would cause a similar problem and the above suggestion
wouldn't help a bit. Until I can see this is easily triggerable on a
reasonably configured system, I am not convinced we should add more
non-trivial changes to the OOM killer path.
--
Michal Hocko
SUSE Labs

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to
majordomo@xxxxxxxxx. For more info on Linux MM, see: http://www.linux-mm.org/