Re: why do we do ALLOC_WMARK_HIGH before going out_of_memory

On Thu 28-01-16 18:40:18, Johannes Weiner wrote:
> On Thu, Jan 28, 2016 at 10:55:15PM +0100, Michal Hocko wrote:
> > On Thu 28-01-16 16:12:40, Johannes Weiner wrote:
> > > On Thu, Jan 28, 2016 at 09:11:23PM +0100, Michal Hocko wrote:
> > > > On Thu 28-01-16 20:02:04, Andrea Arcangeli wrote:
> > > > > It's not immediately apparent whether there is new OOM killer logic
> > > > > upstream that would prevent the risk of a second OOM killer invocation
> > > > > when another OOM kill has already happened while we were stuck in
> > > > > reclaim. In the absence of that, the high wmark check would still be
> > > > > needed.
> > > > 
> > > > Well, my oom detection rework [1] strives to make the OOM detection more
> > > > robust and the retry logic performs the watermark check. So I think the
> > > > last attempt is no longer needed after that patch. I will then remove
> > > > it.
> > > 
> > > Hm? I don't have the same conclusion from what Andrea said.
> > > 
> > > When you have many allocations racing at the same time, they can all
> > > enter __alloc_pages_may_oom() in quick succession. We don't want a
> > > cavalcade of OOM kills when one could be enough, so we have to make
> > > sure that in between should_alloc_retry() giving up and acquiring the
> > > OOM lock nobody else already issued a kill and released enough memory.
> > > 
> > > It's a race window that gets yanked wide open when hundreds of threads
> > > race in __alloc_pages_may_oom(). Your patches don't fix that, AFAICS.
> > 
> > Only one task would be allowed to go out_of_memory and all the rest will
> > simply fail on oom_lock trylock and return with NULL. Or am I missing
> > your point?
> 
> Just picture it with mutex_lock() instead of mutex_trylock() and it
> becomes obvious why you have to do a locked check before the kill.
> 
> The race window is much smaller with the trylock of course, but given
> enough threads it's possible that one of the other contenders would
> acquire the trylock right after the first task drops it:
> 
> first task:                     204th task:
> !reclaim                        !reclaim
> !should_alloc_retry             !should_alloc_retry
> oom_trylock
> out_of_memory
> oom_unlock
>                                 oom_trylock
>                                 out_of_memory // likely unnecessary

That would require the OOM victim to release its memory and drop
TIF_MEMDIE before we go out_of_memory again. And that might happen at
any time, whether we are holding oom_lock or not, because the lock
doesn't synchronize the exit path. So we are basically talking about:

should_alloc_retry
[1]
get_page_from_freelist(ALLOC_WMARK_HIGH)
[2]
out_of_memory

and the race window at [1] is much smaller than the one at [2], because
out_of_memory is quite a costly operation. I wonder whether this
last-moment attempt ever succeeds. I have run my usual OOM flood tests
and it has not succeeded a single time.
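
To make the two windows concrete, here is a simplified sketch of where
the high-wmark attempt currently sits. This is not the exact upstream
code: helper signatures and oom_control fields are approximated, and the
did_some_progress handling is left out.

/*
 * Simplified sketch, only to show where the windows [1] and [2] sit.
 */
static inline struct page *
__alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
                      const struct alloc_context *ac)
{
        struct oom_control oc = {
                .zonelist = ac->zonelist,
                .nodemask = ac->nodemask,
                .gfp_mask = gfp_mask,
                .order = order,
        };
        struct page *page;

        /* Only one task runs the OOM killer; other contenders back off. */
        if (!mutex_trylock(&oom_lock))
                return NULL;

        /*
         * The last-moment attempt: it covers window [1], i.e. memory
         * released by a racing OOM kill after should_alloc_retry() gave
         * up, using the deliberately conservative high watermark.
         */
        page = get_page_from_freelist(gfp_mask | __GFP_HARDWALL, order,
                                      ALLOC_WMARK_HIGH | ALLOC_CPUSET, ac);
        if (!page)
                out_of_memory(&oc);     /* window [2]: anything freed between
                                           the check above and the actual kill
                                           inside here goes unnoticed */

        mutex_unlock(&oom_lock);
        return page;
}

In the real code the return value of out_of_memory() feeds
did_some_progress so that the caller retries the allocation.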

That being said, I do not care that much. I just find this confusing and
basically pointless, because the whole thing is racy by definition and we
are only covering the smaller of the two windows. I would understand if
we did such a last attempt right before we are about to kill the selected
victim; that would cover a much larger race window.
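
For illustration only (this is not a proposed patch), that rearrangement
would look roughly like the sketch below. memory_became_available() is a
hypothetical helper doing a watermark check, and select_bad_process() /
oom_kill_process() take more arguments in the real code.

/* Hypothetical helper: is any eligible zone back above its high wmark? */
static bool memory_became_available(struct oom_control *oc)
{
        struct zoneref *z;
        struct zone *zone;

        for_each_zone_zonelist_nodemask(zone, z, oc->zonelist,
                                        gfp_zone(oc->gfp_mask), oc->nodemask)
                if (zone_watermark_ok(zone, oc->order,
                                      high_wmark_pages(zone), 0, 0))
                        return true;
        return false;
}

bool out_of_memory(struct oom_control *oc)
{
        struct task_struct *victim;

        victim = select_bad_process(oc);        /* arguments simplified */
        if (!victim)
                return false;

        /*
         * Last attempt right before the kill: if a racing OOM kill has
         * already released enough memory, back off without killing.
         * This also covers the time spent scanning tasks for a victim.
         */
        if (memory_became_available(oc)) {
                put_task_struct(victim);
                return true;
        }

        oom_kill_process(oc, victim, "Out of memory"); /* arguments simplified */
        return true;
}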
-- 
Michal Hocko
SUSE Labs
