On Fri 10-07-20 23:18:01, Yafang Shao wrote:
> If current's MMF_OOM_SKIP is set, it means that current is exiting
> or dying and is likely to release its address space.

That is not actually true. The primary reason for this flag is to signal
that the task is no longer relevant for oom victim selection because most
of its memory has already been released. But the task might be stuck in
many places, and waiting for it to terminate could easily lock up the
system. The oom reaper is designed to guarantee forward progress when the
victim cannot make forward progress on its own. For that to work, the oom
killer cannot rely on the victim's state or on the assumption that it will
finish. If you remove this fundamental assumption, the oom killer can lock
up again.

> So we don't need to
> invoke the oom killer again. Otherwise that may cause some unexpected
> issues, for example, below is the issue found in our production
> environment.

Please see the above.

> There are many threads of a multi-threaded task running in parallel in a
> container on many cpus. Then many threads triggered OOM at the same time:
>
> CPU-1                CPU-2         ...        CPU-n
> thread-1             thread-2      ...        thread-n
>
> wait oom_lock        wait oom_lock ...        hold oom_lock
>
>                                               (sigkill received)
>
>                                               select current as victim
>                                               and wakeup oom reaper
>
>                                               release oom_lock
>
>                                               (MMF_OOM_SKIP set by oom reaper)
>
>                                               (lots of pages are freed)
> hold oom_lock

Could you be more specific, please? The page allocator never waits for the
oom_lock and keeps retrying instead. Also, __alloc_pages_may_oom tries to
allocate with the lock held. Could you provide the oom reports, please?

--
Michal Hocko
SUSE Labs
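
For context on the oom_lock behavior referenced above: the allocation
slowpath takes oom_lock with a trylock and backs off and retries the
allocation when the trylock fails rather than sleeping on the lock, and a
task that does get the lock retries the allocation once more while holding
it before resorting to the OOM killer. Below is a minimal, self-contained
userspace sketch of that trylock-and-retry pattern; it is not the kernel's
mm/page_alloc.c code, and all identifiers (try_allocate, alloc_slowpath,
the simulated "memory frees up after a few tries" behavior) are invented
purely for illustration.

/*
 * Userspace sketch (not kernel code) of the trylock-and-retry pattern:
 * a contender never blocks on oom_lock; it keeps retrying the allocation,
 * and only the lock holder considers invoking the OOM killer, after one
 * more allocation attempt with the lock held.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t oom_lock = PTHREAD_MUTEX_INITIALIZER;

/* Stand-in for a high-watermark allocation attempt; pretend memory
 * becomes available after a couple of tries. */
static bool try_allocate(void)
{
	static int attempts;
	return ++attempts >= 3;
}

static void alloc_slowpath(void)
{
	for (;;) {
		if (try_allocate()) {
			puts("allocation succeeded");
			return;
		}
		if (pthread_mutex_trylock(&oom_lock) != 0) {
			/* Somebody else holds oom_lock and is presumably
			 * making progress for us: back off briefly and
			 * retry the allocation instead of blocking. */
			usleep(1000);
			continue;
		}
		/* Lock acquired: retry the allocation once more with the
		 * lock held before resorting to killing anything. */
		if (!try_allocate())
			puts("would invoke the OOM killer here");
		pthread_mutex_unlock(&oom_lock);
	}
}

int main(void)
{
	alloc_slowpath();
	return 0;
}

The sketch compiles with gcc -pthread; the point it illustrates is that
waiters do not queue on oom_lock, so the scenario quoted above has to
explain how thread-1 ends up invoking the OOM killer even though its
allocation attempt with the lock held should succeed once the reaper has
freed the victim's memory.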