Michal Hocko wrote:
> Look. I am fed up with this discussion. You are fiddling with the code
> and moving hacks around with a lot of hand waving. Rather than trying to
> look at the underlying problem. Your patch completely ignores PREEMPT as
> I've mentioned in previous versions.

I'm not ignoring PREEMPT. Fixing this OOM lockup problem properly will
take as much effort as fixing the Spectre/Meltdown problems. This patch is
a mitigation for a regression introduced by the fix for CVE-2018-1000200.
Nothing is gained by deferring this patch.

> I would be OK with removing the sleep from the out_of_memory path based
> on your argumentation that we have a _proper_ synchronization with the
> exit path now. Such attempt should be made in a separate patch.

You suggested removing this sleep from my patch without realizing that
PF_WQ_WORKER threads need an explicit schedule_timeout_*(). My patch is
trying to be as conservative/safe as possible (for easier backporting)
while reducing the risk of falling into OOM lockup.

I worry that you are completely overlooking the

 char *fmt, ...)
 	 */
 	if (!mutex_trylock(&oom_lock)) {
 		*did_some_progress = 1;
-		schedule_timeout_uninterruptible(1);
 		return NULL;
 	}

part of this patch. Currently, the short sleep is applied so
randomly/inconsistently that schedule_timeout_uninterruptible(1) is called
when we fail to grab oom_lock (even if the current thread was already
marked as an OOM victim), schedule_timeout_killable(1) is called when we
kill a new OOM victim, and no sleep happens at all if we find that there
are in-flight OOM victims. This patch centralizes the sleep by calling
schedule_timeout_uninterruptible(1) only on the "goto retry;" path, so
that the current thread reliably yields the CPU to the owner of oom_lock.

You are free to propose removing this centralized sleep after my change is
applied. Of course, you will then be responsible for showing that removing
this centralized sleep (except for PF_WQ_WORKER threads) does not
negatively affect the owner of oom_lock (e.g. a SCHED_IDLE thread holding
oom_lock being blocked for longer than with my approach).
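
To make the "centralized sleep" point concrete, here is a rough sketch of
the control flow I am aiming for. This is not the patch itself:
alloc_oom_step() and alloc_slowpath_sketch() are made-up names standing in
for the __alloc_pages_may_oom() step and the allocation slowpath retry
loop, and everything unrelated to the sleep is omitted.

/*
 * Sketch only, not the actual patch: alloc_oom_step() stands in for the
 * __alloc_pages_may_oom() step of the allocation slowpath.
 */
static struct page *alloc_oom_step(unsigned long *did_some_progress)
{
	*did_some_progress = 0;
	if (!mutex_trylock(&oom_lock)) {
		/* No sleep here any more; just ask the caller to retry. */
		*did_some_progress = 1;
		return NULL;
	}
	/* ... select and kill a victim, again without sleeping ... */
	*did_some_progress = 1;
	mutex_unlock(&oom_lock);
	return NULL;
}

static struct page *alloc_slowpath_sketch(void)
{
	unsigned long did_some_progress;
	struct page *page;

retry:
	page = alloc_oom_step(&did_some_progress);
	if (page || !did_some_progress)
		return page;
	/*
	 * The single, centralized yield point: no matter why the OOM step
	 * made no memory available, give the owner of oom_lock a chance to
	 * run before retrying. (The real slowpath has other exit
	 * conditions; they are omitted here.)
	 */
	schedule_timeout_uninterruptible(1);
	goto retry;
}

With this shape, whether the trylock fails, a new victim is killed, or an
in-flight victim is found, every retry passes through the same sleep.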
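
For reference, the reason an explicit schedule_timeout_*() matters for
PF_WQ_WORKER threads is that workqueue concurrency management only gets a
chance to start another worker on the CPU when the current worker really
goes to sleep; cond_resched() from a looping worker never triggers that.
The snippet below is an illustration of the pattern I mean, not a quote of
the exact mainline code:

	/*
	 * A worker thread that cannot make progress must really sleep so
	 * that the workqueue can dispatch other pending work items on
	 * this CPU; an ordinary thread can simply offer to reschedule.
	 */
	if (current->flags & PF_WQ_WORKER)
		schedule_timeout_uninterruptible(1);
	else
		cond_resched();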