+ mm-oom-marks-all-killed-tasks-as-oom-victims.patch added to -mm tree

The patch titled
     Subject: mm, oom: mark all killed tasks as oom victims
has been added to the -mm tree.  Its filename is
     mm-oom-marks-all-killed-tasks-as-oom-victims.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-oom-marks-all-killed-tasks-as-oom-victims.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-oom-marks-all-killed-tasks-as-oom-victims.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Michal Hocko <mhocko@xxxxxxxx>
Subject: mm, oom: mark all killed tasks as oom victims

Patch series "oom, memcg: do not report racy no-eligible OOM".

This is a followup to
http://lkml.kernel.org/r/20181010151135.25766-1-mhocko@xxxxxxxxxx which
was nacked, mostly because Tetsuo was able to find a simple workload
that can trigger a race where a no-eligible task is reported without a
good reason.  I believe patch 2 addresses that issue, so we do not have
to play dirty games with throttling just because of the race.  I still
believe the patch proposed in
http://lkml.kernel.org/r/20181010151135.25766-1-mhocko@xxxxxxxxxx is a
useful one, but that can be addressed later.

This series comprises 2 patches.  The first one is something I meant to do
a long time ago; I just never had the time to do it.  We need it here to
handle CLONE_VM without CLONE_SIGHAND cases.  The second patch closes the
race.


This patch (of 2):

Historically we have called mark_oom_victim only on the main task selected
as the oom victim, because oom victims have access to memory reserves and
granting that access to all killed tasks could deplete memory reserves very
quickly and cause even larger problems.

Since only partial access to memory reserves is allowed now, this risk no
longer exists, so all tasks killed along with the oom victim can be
marked as well.

The primary motivation for this is that process groups which do not share
signals will behave more like standard thread groups with respect to oom
handling (i.e. tsk_is_oom_victim will work the same way for them).

- Use find_lock_task_mm to stabilize mm as suggested by Tetsuo

Link: http://lkml.kernel.org/r/20190107143802.16847-2-mhocko@xxxxxxxxxx
Signed-off-by: Michal Hocko <mhocko@xxxxxxxx>
Cc: Tetsuo Handa <penguin-kernel@xxxxxxxxxxxxxxxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/oom_kill.c |    6 ++++++
 1 file changed, 6 insertions(+)

--- a/mm/oom_kill.c~mm-oom-marks-all-killed-tasks-as-oom-victims
+++ a/mm/oom_kill.c
@@ -892,6 +892,7 @@ static void __oom_kill_process(struct ta
 	 */
 	rcu_read_lock();
 	for_each_process(p) {
+		struct task_struct *t;
 		if (!process_shares_mm(p, mm))
 			continue;
 		if (same_thread_group(p, victim))
@@ -911,6 +912,11 @@ static void __oom_kill_process(struct ta
 		if (unlikely(p->flags & PF_KTHREAD))
 			continue;
 		do_send_sig_info(SIGKILL, SEND_SIG_PRIV, p, PIDTYPE_TGID);
+		t = find_lock_task_mm(p);
+		if (!t)
+			continue;
+		mark_oom_victim(t);
+		task_unlock(t);
 	}
 	rcu_read_unlock();
 
_

Patches currently in -mm which might be from mhocko@xxxxxxxx are

mm-oom-marks-all-killed-tasks-as-oom-victims.patch
memcg-do-not-report-racy-no-eligible-oom-tasks.patch



