Re: [PATCH] mm: mempolicy: don't select exited threads as OOM victims

On 2019/07/01 23:16, Michal Hocko wrote:
> Thinking about it some more it seems that we can go with your original
> fix if we also reorder oom_evaluate_task
> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> index f719b64741d6..e5feb0f72e3b 100644
> --- a/mm/oom_kill.c
> +++ b/mm/oom_kill.c
> @@ -318,9 +318,6 @@ static int oom_evaluate_task(struct task_struct *task, void *arg)
>  	struct oom_control *oc = arg;
>  	unsigned long points;
>  
> -	if (oom_unkillable_task(task, NULL, oc->nodemask))
> -		goto next;
> -
>  	/*
>  	 * This task already has access to memory reserves and is being killed.
>  	 * Don't allow any other task to have access to the reserves unless
> @@ -333,6 +330,9 @@ static int oom_evaluate_task(struct task_struct *task, void *arg)
>  		goto abort;
>  	}
>  
> +	if (oom_unkillable_task(task, NULL, oc->nodemask))
> +		goto next;
> +
>  	/*
>  	 * If task is allocating a lot of memory and has been marked to be
>  	 * killed first if it triggers an oom, then select it.
> 
> I do not see any strong reason to keep the current ordering. OOM victim
> check is trivial so it shouldn't add a visible overhead for few
> unkillable tasks that we might encounter.
> 

Yes, provided that we can tolerate that there can be only one OOM victim at a
time for !memcg OOM events (because an OOM victim selected in a different OOM
context will hit the "goto abort;" path and stop the scan).



Thinking again, I think that the same problem exists for the mask == NULL path
as long as "a process with a dying leader and live threads" is possible. In
that case, it is better to fix things up after
has_intersects_mems_allowed()/cpuset_mems_allowed_intersects() has judged that
some thread is eligible.

diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index d1c9c4e..43e499e 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -109,8 +109,23 @@ static bool oom_cpuset_eligible(struct task_struct *start,
 			 */
 			ret = cpuset_mems_allowed_intersects(current, tsk);
 		}
-		if (ret)
-			break;
+		if (ret) {
+			/*
+			 * Exclude dead threads as ineligible when selecting
+			 * an OOM victim. But include dead threads as eligible
+			 * when waiting for OOM victims to get MMF_OOM_SKIP.
+			 *
+			 * Strictly speaking, tsk->mm should be checked under
+			 * task lock because cpuset_mems_allowed_intersects()
+			 * does not take task lock. But racing with exit_mm()
+			 * is not fatal. Thus, use cheaper barrier rather than
+			 * strict task lock.
+			 */
+			smp_rmb();
+			if (tsk->mm || tsk_is_oom_victim(tsk))
+				break;
+			ret = false;
+		}
 	}
 	rcu_read_unlock();
 



