Re: [RFC PATCH 2/2] mm,oom: Try last second allocation after selecting an OOM victim.

On Tue 17-10-17 22:04:59, Tetsuo Handa wrote:
[...]
> I checked http://lkml.kernel.org/r/20160128163802.GA15953@xxxxxxxxxxxxxx but
> I didn't find reason to use high watermark for the last second allocation
> attempt. The only thing required for avoiding livelock will be "do not
> depend on __GFP_DIRECT_RECLAIM allocation while oom_lock is held".

Andrea tried to explain it http://lkml.kernel.org/r/20160128190204.GJ12228@xxxxxxxxxx
"
: Elaborating the comment: the reason for the high wmark is to reduce
: the likelihood of livelocks and be sure to invoke the OOM killer, if
: we're still under pressure and reclaim just failed. The high wmark is
: used to be sure the failure of reclaim isn't going to be ignored. If
: using the min wmark like you propose there's risk of livelock or
: anyway of delayed OOM killer invocation.
: 
: The reason for doing one last wmark check (regardless of the wmark
: used) before invoking the oom killer, was just to be sure another OOM
: killer invocation hasn't already freed a ton of memory while we were
: stuck in reclaim. A lot of free memory generated by the OOM killer,
: won't make a parallel reclaim more likely to succeed, it just creates
: free memory, but reclaim only succeeds when it finds "freeable" memory
: and it makes progress in converting it to free memory. So for the
: purpose of this last check, the high wmark would work fine as lots of
: free memory would have been generated in such case.
"

I've had some problems with this reasoning for the current OOM killer
logic but I haven't been convincing enough. Maybe you will have better
luck.

> Below is updated patch. The motivation of this patch is to guarantee that
> the thread (it can be SCHED_IDLE priority) calling out_of_memory() can use
> enough CPU resource by saving CPU resource wasted by threads (they can be
> !SCHED_IDLE priority) waiting for out_of_memory(). Thus, replace
> mutex_trylock() with mutex_lock_killable().

So what exactly guarantees that the SCHED_IDLE task keeps making progress
while it holds the oom lock and other high priority processes keep
preempting it? Not everybody is inside the allocation path to get out of
the way.

> 
> By replacing mutex_trylock() with mutex_lock_killable(), it might prevent
> the OOM reaper from start reaping immediately. Thus, remove mutex_lock() from
> the OOM reaper.

oom_lock shouldn't be necessary in oom_reaper anymore and that is worth
a separate patch.
 
> By removing mutex_lock() from the OOM reaper, the race window of needlessly
> selecting next OOM victim becomes wider, for the last second allocation
> attempt no longer waits for the OOM reaper. Thus, do the really last
> allocation attempt after selecting an OOM victim using the same watermark.
> 
> Can we go with this direction?

The patch is just too cluttered. You do not want to use
__alloc_pages_slowpath. get_page_from_freelist would be more
appropriate. Also doing alloc_pages_before_oomkill two times seems
excessive.
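For clarity, the shape of the suggested last-second attempt would be roughly the following (pseudocode sketch only, not against any particular tree; the function names are taken from the thread, the argument list is abbreviated):

```
/* pseudocode: a single last-second attempt from the free lists while
 * holding oom_lock -- no direct reclaim, so it cannot stall in reclaim
 * while other allocators wait for the lock */
static struct page *alloc_pages_before_oomkill(gfp_mask, order, alloc_flags, ac)
{
	return get_page_from_freelist(gfp_mask, order, alloc_flags, ac);
}
```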

That being said, make sure you address all the concerns brought up by
Andrea and Johannes in the above email thread first.
-- 
Michal Hocko
SUSE Labs
