Re: [PATCH 1/3] mm,oom: Move last second allocation to inside the OOM killer.

On Fri 01-12-17 14:56:38, Johannes Weiner wrote:
> On Fri, Dec 01, 2017 at 03:46:34PM +0100, Michal Hocko wrote:
> > On Fri 01-12-17 14:33:17, Johannes Weiner wrote:
> > > On Sat, Nov 25, 2017 at 07:52:47PM +0900, Tetsuo Handa wrote:
> > > > @@ -1068,6 +1071,17 @@ bool out_of_memory(struct oom_control *oc)
> > > >  	}
> > > >  
> > > >  	select_bad_process(oc);
> > > > +	/*
> > > > +	 * Try really last second allocation attempt after we selected an OOM
> > > > +	 * victim, for somebody might have managed to free memory while we were
> > > > +	 * selecting an OOM victim which can take quite some time.
> > > 
> > > Somebody might free some memory right after this attempt fails. OOM
> > > can always be a temporary state that resolves on its own.
> > > 
> > > What keeps us from declaring OOM prematurely is the fact that we
> > > already scanned the entire LRU list without success, not last second
> > > or last-last second, or REALLY last-last-last-second allocations.
> > 
> > You are right that this is inherently racy. The point here is, however,
> > that the race window between the last check and the kill can be _huge_!
> 
> My point is that it's irrelevant. We already sampled the entire LRU
> list; compared to that, the delay before the kill is immaterial.

Well, I would disagree. I have seen OOM reports with plenty of free memory.
Closer debugging showed that an existing process was on the way out, the
oom victim selection took way too long, and the killer fired after that
large process had already managed to free its memory. There were different
hacks^Wheuristics to cover those cases but they turned out to just cause
different corner cases. Moving the existing last moment allocation after a
potentially very time consuming action is a relatively cheap and safe
measure to cover those cases without any negative side effects I can think
of.
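
To make the ordering concrete, here is a userspace toy (not kernel code;
try_alloc() and select_victim() are just stand-ins for the high watermark
get_page_from_freelist() retry and for select_bad_process()):

/* Userspace toy modelling the two orderings; all names are made up. */
#include <stdbool.h>
#include <stdio.h>

static bool try_alloc(void)
{
	/* cheap: recheck the watermarks/freelists (stubbed out here) */
	return false;
}

static int select_victim(void)
{
	/* expensive: walks the whole task list, can take a long time */
	return 42;	/* pid of the chosen victim, made up */
}

static void oom_kill(int pid)
{
	printf("killing %d\n", pid);
}

/* current ordering: the last retry happens before the long victim scan */
static void out_of_memory_old(void)
{
	if (try_alloc())
		return;		/* raced with somebody freeing memory */
	oom_kill(select_victim());
}

/* proposed ordering: retry once more after the scan, right before the kill */
static void out_of_memory_new(void)
{
	int victim = select_victim();

	if (try_alloc())
		return;		/* an exiting task freed memory meanwhile */
	oom_kill(victim);
}

int main(void)
{
	out_of_memory_old();
	out_of_memory_new();
	return 0;
}

The only difference is where the cheap recheck sits relative to the
expensive scan; the point is to shrink the window between the last
recheck and the actual kill.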

Anyway, if the delay is immaterial then the existing last retry is even
more pointless, because it is executed right _after_ we gave up reclaim
retries. Compare that to the select_bad_process time window. And really,
that can take quite a lot of time, especially in weird priority inversion
situations.

> > Another argument is that the allocator itself could have changed its
> > allocation capabilities - e.g. become the OOM victim itself since the
> > last attempt - and a new attempt could reflect that fact.
> 
> Can you outline how this would happen exactly?

http://lkml.kernel.org/r/20171101135855.bqg2kuj6ao2cicqi@xxxxxxxxxxxxxx

As I try to explain there, the workload is really pathological, but this
patch (resp. the follow up based on it) is only a moderately ugly
workaround considering that it actually can help.
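
To illustrate the "changed allocation capabilities" part with a similarly
hand-wavy userspace toy (is_oom_victim and the reserve size below are made
up; the kernel analogue is the victim-only access to memory reserves):

#include <stdbool.h>
#include <stdio.h>

static long free_pages;			/* ordinary free memory: exhausted */
static long oom_reserve_pages = 128;	/* reserves only a victim may touch */

static bool try_alloc(bool is_oom_victim)
{
	if (free_pages > 0)
		return true;
	/* a task already marked as an OOM victim may dip into the reserves */
	return is_oom_victim && oom_reserve_pages > 0;
}

int main(void)
{
	bool victim = false;

	/* before victim selection: the attempt fails */
	printf("before: %d\n", try_alloc(victim));

	/* select_bad_process() (or somebody else) picked us as the victim */
	victim = true;

	/* a retry made now can succeed even though nothing was freed */
	printf("after:  %d\n", try_alloc(victim));
	return 0;
}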

> > > Nacked-by: Johannes Weiner <hannes@xxxxxxxxxxx>

-- 
Michal Hocko
SUSE Labs



