Re: [PATCH] memcg: killed threads should not invoke memcg OOM killer

Hi, Tetsuo,

On 26.12.2018 13:13, Tetsuo Handa wrote:
> It is possible for a single-process-group memcg to easily swamp the log
> with "no eligible OOM victim" messages after the current thread has been
> OOM-killed, due to a race between the memcg charge path and the OOM
> reaper [1]:
> 
> Thread-1                 Thread-2                       OOM reaper
> try_charge()
>   mem_cgroup_out_of_memory()
>     mutex_lock(oom_lock)
>                         try_charge()
>                           mem_cgroup_out_of_memory()
>                             mutex_lock(oom_lock)
>     out_of_memory()
>       select_bad_process()
>       oom_kill_process(current)
>       wake_oom_reaper()
>                                                         oom_reap_task()
>                                                         # sets MMF_OOM_SKIP
>     mutex_unlock(oom_lock)
>                             out_of_memory()
>                               select_bad_process() # no task
>                             mutex_unlock(oom_lock)
> 
> We don't need to invoke the memcg OOM killer if the current thread was
> killed while waiting for oom_lock, since mem_cgroup_oom_synchronize(true)
> and memory_max_write() can bail out upon SIGKILL, and try_charge() allows
> already-killed/exiting threads to make forward progress.
> 
> Michal has a plan to use tsk_is_oom_victim() by calling mark_oom_victim()
> on all thread groups sharing the victim's mm. But the
> fatal_signal_pending() check in this patch helps regardless of Michal's
> plan, because it avoids needlessly calling out_of_memory() when the
> current thread is already terminating (e.g. it got SIGINT after passing
> the fatal_signal_pending() check in try_charge(), and
> mutex_lock_killable() did not block).
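
For reference, the forward-progress check in try_charge() that the
description relies on looks roughly like this (simplified from
mm/memcontrol.c; the surrounding reclaim and retry logic is elided):

	/*
	 * Dying and OOM-killed tasks get their charge forced so that
	 * they can exit quickly and free their memory.
	 */
	if (unlikely(tsk_is_oom_victim(current) ||
		     fatal_signal_pending(current) ||
		     current->flags & PF_EXITING))
		goto force;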
> 
> [1] https://lkml.kernel.org/r/ea637f9a-5dd0-f927-d26d-d0b4fd8ccb6f@xxxxxxxxxxxxxxxxxxx
> 
> Signed-off-by: Tetsuo Handa <penguin-kernel@xxxxxxxxxxxxxxxxxxx>
> ---
>  mm/memcontrol.c | 9 +++++++--
>  1 file changed, 7 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index b860dd4f7..b0d3bf3 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -1389,8 +1389,13 @@ static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
>  	};
>  	bool ret;
>  
> -	mutex_lock(&oom_lock);
> -	ret = out_of_memory(&oc);
> +	if (mutex_lock_killable(&oom_lock))
> +		return true;
> +	/*
> +	 * A few threads which were not waiting at mutex_lock_killable() can
> +	 * fail to bail out. Therefore, check again after holding oom_lock.
> +	 */
> +	ret = fatal_signal_pending(current) || out_of_memory(&oc);

This fatal_signal_pending() check makes sense, because it is possible
for a killed task to wake up slowly and to return from schedule() when
there are no more waiters left for the lock.
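
Concretely, the window looks something like this:

Thread-1 (killed)                      Thread-2 (oom_lock owner)
mutex_lock_killable(&oom_lock)
  # sleeps in schedule()
# SIGKILL arrives
                                       mutex_unlock(&oom_lock)
                                       # wakes Thread-1
# returns from schedule(), can take the
# lock without re-checking for signals,
# and mutex_lock_killable() returns 0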

Why not make this approach generic and add such a check to
__mutex_lock_common() after schedule_preempt_disabled(), instead of
doing it here? That would handle all places like this at once.

(Adding the check alone is not enough for __mutex_lock_common(), since
 the mutex code would also need to wake the next waiter. So you would
 need a couple of changes to the mutex code.)
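
As a rough, untested sketch of what that could look like in the wait
loop of __mutex_lock_common() (kernel/locking/mutex.c); the
err_wake_next label and the next-waiter handoff it implies are
hypothetical:

		spin_unlock(&lock->wait_lock);
		schedule_preempt_disabled();

		/*
		 * Proposed: re-check for a fatal signal right after
		 * waking, before trying to take the lock.  If the
		 * unlocker has already handed the lock (or its wakeup)
		 * to us, bailing out here means the next waiter must
		 * be woken, so a plain "goto err" is not enough.
		 */
		if (unlikely(signal_pending_state(state, current))) {
			ret = -EINTR;
			goto err_wake_next;	/* hypothetical */
		}

		set_current_state(state);
		if (__mutex_trylock(lock))
			break;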

Kirill

>  	mutex_unlock(&oom_lock);
>  	return ret;
>  }
> 
