+ mm-oom_kill-wake-futex-waiters-before-annihilating-victim-shared-mutex.patch added to -mm tree

The patch titled
     Subject: mm/oom_kill: wake futex waiters before annihilating victim shared mutex
has been added to the -mm tree.  Its filename is
     mm-oom_kill-wake-futex-waiters-before-annihilating-victim-shared-mutex.patch

This patch should soon appear at
    https://ozlabs.org/~akpm/mmots/broken-out/mm-oom_kill-wake-futex-waiters-before-annihilating-victim-shared-mutex.patch
and later at
    https://ozlabs.org/~akpm/mmotm/broken-out/mm-oom_kill-wake-futex-waiters-before-annihilating-victim-shared-mutex.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Joel Savitz <jsavitz@xxxxxxxxxx>
Subject: mm/oom_kill: wake futex waiters before annihilating victim shared mutex

When two or more processes share a futex located in a shared mmapped region
(for example, a process that shares a lock between itself and a number of
its child processes), we have observed that if a process holding the lock
is oom-killed, at least one waiter is never woken and simply continues to
wait forever.

This is visible with pthreads by inspecting the __owner field of the
pthread_mutex_t structure in a waiting process, for example with gdb.

We confirm reproduction of this issue by attaching to a waiting process of
a test program, inspecting the contents of its pthread_mutex_t, noting the
value in the __owner field, and then checking dmesg to see whether that
owner has already been oom-killed.

This issue can be tricky to reproduce, but with the modifications in this
small patch I have been unable to reproduce it at all.  There may be
additional considerations that I have not taken into account in this patch,
and I welcome any comments and criticism.

Link: https://lkml.kernel.org/r/20211207214902.772614-1-jsavitz@xxxxxxxxxx
Co-developed-by: Nico Pache <npache@xxxxxxxxxx>
Signed-off-by: Nico Pache <npache@xxxxxxxxxx>
Signed-off-by: Joel Savitz <jsavitz@xxxxxxxxxx>
Cc: Waiman Long <longman@xxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/oom_kill.c |    3 +++
 1 file changed, 3 insertions(+)

--- a/mm/oom_kill.c~mm-oom_kill-wake-futex-waiters-before-annihilating-victim-shared-mutex
+++ a/mm/oom_kill.c
@@ -44,6 +44,7 @@
 #include <linux/kthread.h>
 #include <linux/init.h>
 #include <linux/mmu_notifier.h>
+#include <linux/futex.h>
 
 #include <asm/tlb.h>
 #include "internal.h"
@@ -890,6 +891,7 @@ static void __oom_kill_process(struct ta
 	 * in order to prevent the OOM victim from depleting the memory
 	 * reserves from the user space under its control.
 	 */
+	futex_exit_release(victim);
 	do_send_sig_info(SIGKILL, SEND_SIG_PRIV, victim, PIDTYPE_TGID);
 	mark_oom_victim(victim);
 	pr_err("%s: Killed process %d (%s) total-vm:%lukB, anon-rss:%lukB, file-rss:%lukB, shmem-rss:%lukB, UID:%u pgtables:%lukB oom_score_adj:%hd\n",
@@ -930,6 +932,7 @@ static void __oom_kill_process(struct ta
 		 */
 		if (unlikely(p->flags & PF_KTHREAD))
 			continue;
+		futex_exit_release(p);
 		do_send_sig_info(SIGKILL, SEND_SIG_PRIV, p, PIDTYPE_TGID);
 	}
 	rcu_read_unlock();
_

Patches currently in -mm which might be from jsavitz@xxxxxxxxxx are

mm-oom_kill-wake-futex-waiters-before-annihilating-victim-shared-mutex.patch
