Johannes Weiner wrote:
> On Fri, Dec 01, 2017 at 04:17:15PM +0100, Michal Hocko wrote:
> > On Fri 01-12-17 14:56:38, Johannes Weiner wrote:
> > > On Fri, Dec 01, 2017 at 03:46:34PM +0100, Michal Hocko wrote:
> > > > On Fri 01-12-17 14:33:17, Johannes Weiner wrote:
> > > > > On Sat, Nov 25, 2017 at 07:52:47PM +0900, Tetsuo Handa wrote:
> > > > > > @@ -1068,6 +1071,17 @@ bool out_of_memory(struct oom_control *oc)
> > > > > >  	}
> > > > > >  
> > > > > >  	select_bad_process(oc);
> > > > > > +	/*
> > > > > > +	 * Try really last second allocation attempt after we selected an OOM
> > > > > > +	 * victim, for somebody might have managed to free memory while we were
> > > > > > +	 * selecting an OOM victim which can take quite some time.
> > > > > 
> > > > > Somebody might free some memory right after this attempt fails. OOM
> > > > > can always be a temporary state that resolves on its own.

"[PATCH 3/3] mm,oom: Remove oom_lock serialization from the OOM reaper."
says that doing the last second allocation attempt after
select_bad_process() should help the OOM reaper free memory, compared to
doing the last second allocation before select_bad_process().

> > > > > What keeps us from declaring OOM prematurely is the fact that we
> > > > > already scanned the entire LRU list without success, not last
> > > > > second or last-last second, or REALLY last-last-last-second
> > > > > allocations.
> > > > 
> > > > You are right that this is inherently racy. The point here is,
> > > > however, that the race window between the last check and the kill
> > > > can be _huge_!
> > > 
> > > My point is that it's irrelevant. We already sampled the entire LRU
> > > list; compared to that, the delay before the kill is immaterial.
> > 
> > Well, I would disagree. I have seen OOM reports with free memory.
> > Closer debugging showed that an existing process was on the way out
> > and the OOM victim selection took way too long and fired after a
> > large process managed to exit. There were different
> > hacks^Wheuristics to cover those cases, but they turned out to just
> > cause different corner cases. Moving the existing last-moment
> > allocation after a potentially very time-consuming action is a
> > relatively cheap and safe measure to cover those cases without any
> > negative side effects I can think of.
> 
> An existing process can exit right after you pull the trigger. How big
> is *that* race window? By this logic you could add a sleep(5) before
> the last-second allocation because it would increase the likelihood of
> somebody else exiting voluntarily.

Sleeping with oom_lock held is bad. Even schedule_timeout_killable(1) at
out_of_memory() can allow the owner of oom_lock to sleep effectively
forever when many threads are hitting mutex_trylock(&oom_lock) at
__alloc_pages_may_oom(). Let alone adding sleep(5) before sending
SIGKILL and waking up the OOM reaper.