Patch "memcg: fix soft lockup in the OOM process" has been added to the 6.1-stable tree

This is a note to let you know that I've just added the patch titled

    memcg: fix soft lockup in the OOM process

to the 6.1-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     memcg-fix-soft-lockup-in-the-oom-process.patch
and it can be found in the queue-6.1 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit 79c490b951959dcd6070c3073fba6054b92f6d82
Author: Chen Ridong <chenridong@xxxxxxxxxx>
Date:   Tue Dec 24 02:52:38 2024 +0000

    memcg: fix soft lockup in the OOM process
    
    [ Upstream commit ade81479c7dda1ce3eedb215c78bc615bbd04f06 ]
    
    A soft lockup was observed in production: about 56,000 tasks were in
    the OOM cgroup, and the lockup triggered while they were being
    traversed.
    
    watchdog: BUG: soft lockup - CPU#2 stuck for 23s! [VM Thread:1503066]
    CPU: 2 PID: 1503066 Comm: VM Thread Kdump: loaded Tainted: G
    Hardware name: Huawei Cloud OpenStack Nova, BIOS
    RIP: 0010:console_unlock+0x343/0x540
    RSP: 0000:ffffb751447db9a0 EFLAGS: 00000247 ORIG_RAX: ffffffffffffff13
    RAX: 0000000000000001 RBX: 0000000000000000 RCX: 00000000ffffffff
    RDX: 0000000000000000 RSI: 0000000000000004 RDI: 0000000000000247
    RBP: ffffffffafc71f90 R08: 0000000000000000 R09: 0000000000000040
    R10: 0000000000000080 R11: 0000000000000000 R12: ffffffffafc74bd0
    R13: ffffffffaf60a220 R14: 0000000000000247 R15: 0000000000000000
    CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    CR2: 00007f2fe6ad91f0 CR3: 00000004b2076003 CR4: 0000000000360ee0
    DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
    DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
    Call Trace:
     vprintk_emit+0x193/0x280
     printk+0x52/0x6e
     dump_task+0x114/0x130
     mem_cgroup_scan_tasks+0x76/0x100
     dump_header+0x1fe/0x210
     oom_kill_process+0xd1/0x100
     out_of_memory+0x125/0x570
     mem_cgroup_out_of_memory+0xb5/0xd0
     try_charge+0x720/0x770
     mem_cgroup_try_charge+0x86/0x180
     mem_cgroup_try_charge_delay+0x1c/0x40
     do_anonymous_page+0xb5/0x390
     handle_mm_fault+0xc4/0x1f0
    
    This happens because thousands of processes are in the OOM cgroup and
    traversing all of them takes a long time, which results in the soft
    lockup in the OOM path.
    
    To fix this issue, call 'cond_resched' in the 'mem_cgroup_scan_tasks'
    loop once every 1024 iterations.  For global OOM, where the task list
    is walked under rcu_read_lock() and sleeping is therefore not allowed,
    instead call 'touch_softlockup_watchdog' once every 1024 iterations.
    (A userspace sketch of this throttling pattern follows the sign-off
    block below.)
    
    Link: https://lkml.kernel.org/r/20241224025238.3768787-1-chenridong@xxxxxxxxxxxxxxx
    Fixes: 9cbb78bb3143 ("mm, memcg: introduce own oom handler to iterate only over its own threads")
    Signed-off-by: Chen Ridong <chenridong@xxxxxxxxxx>
    Acked-by: Michal Hocko <mhocko@xxxxxxxx>
    Cc: Roman Gushchin <roman.gushchin@xxxxxxxxx>
    Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
    Cc: Shakeel Butt <shakeelb@xxxxxxxxxx>
    Cc: Muchun Song <songmuchun@xxxxxxxxxxxxx>
    Cc: Michal Koutný <mkoutny@xxxxxxxx>
    Cc: <stable@xxxxxxxxxxxxxxx>
    Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>
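
For illustration, a minimal userspace sketch (not part of the patch) of
the throttling pattern both hunks below apply: a power-of-two mask makes
the periodic check a single AND, and the yield fires only once every
1024 iterations.  sched_yield() and the heartbeat store are hypothetical
stand-ins for cond_resched() and touch_softlockup_watchdog(),
respectively.

#include <sched.h>
#include <stdio.h>
#include <time.h>

static time_t heartbeat;	/* stand-in for the softlockup timestamp */

static void pet_watchdog(void)
{
	heartbeat = time(NULL);	/* cheap, never sleeps */
}

int main(void)
{
	int i = 0;

	for (int task = 0; task < 56000; task++) {
		/* Avoid monopolizing the CPU, as the patch does. */
		if ((++i & 1023) == 0) {
			sched_yield();	/* sleepable path: like cond_resched() */
			pet_watchdog();	/* atomic path: like touch_softlockup_watchdog() */
		}
		/* ... dump/process one task here ... */
	}
	printf("walked %d tasks, last heartbeat %ld\n", i, (long)heartbeat);
	return 0;
}

The mask form '(++i & 1023) == 0' is equivalent to 'i % 1024 == 0' for
this positive counter, but avoids a division inside an already-hot loop.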

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 280bb6969c0bf..3f7cab196eb62 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1242,6 +1242,7 @@ int mem_cgroup_scan_tasks(struct mem_cgroup *memcg,
 {
 	struct mem_cgroup *iter;
 	int ret = 0;
+	int i = 0;
 
 	BUG_ON(memcg == root_mem_cgroup);
 
@@ -1250,8 +1251,12 @@ int mem_cgroup_scan_tasks(struct mem_cgroup *memcg,
 		struct task_struct *task;
 
 		css_task_iter_start(&iter->css, CSS_TASK_ITER_PROCS, &it);
-		while (!ret && (task = css_task_iter_next(&it)))
+		while (!ret && (task = css_task_iter_next(&it))) {
+			/* Avoid potential softlockup warning */
+			if ((++i & 1023) == 0)
+				cond_resched();
 			ret = fn(task, arg);
+		}
 		css_task_iter_end(&it);
 		if (ret) {
 			mem_cgroup_iter_break(memcg, iter);
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 4de30c6c5183f..f4c8ef863ea79 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -45,6 +45,7 @@
 #include <linux/init.h>
 #include <linux/mmu_notifier.h>
 #include <linux/cred.h>
+#include <linux/nmi.h>
 
 #include <asm/tlb.h>
 #include "internal.h"
@@ -430,10 +431,15 @@ static void dump_tasks(struct oom_control *oc)
 		mem_cgroup_scan_tasks(oc->memcg, dump_task, oc);
 	else {
 		struct task_struct *p;
+		int i = 0;
 
 		rcu_read_lock();
-		for_each_process(p)
+		for_each_process(p) {
+			/* Avoid potential softlockup warning */
+			if ((++i & 1023) == 0)
+				touch_softlockup_watchdog();
 			dump_task(p, oc);
+		}
 		rcu_read_unlock();
 	}
 }



