On 2024/12/17 20:54, Michal Hocko wrote:
> On Tue 17-12-24 12:18:28, Chen Ridong wrote:
> [...]
>> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
>> index 1c485beb0b93..14260381cccc 100644
>> --- a/mm/oom_kill.c
>> +++ b/mm/oom_kill.c
>> @@ -390,6 +390,7 @@ static int dump_task(struct task_struct *p, void *arg)
>>  	if (!is_memcg_oom(oc) && !oom_cpuset_eligible(p, oc))
>>  		return 0;
>>  
>> +	cond_resched();
>>  	task = find_lock_task_mm(p);
>>  	if (!task) {
>>  		/*
>
> This is called from RCU read lock for the global OOM killer path and I
> do not think you can schedule there. I do not remember specifics of task
> traversal for cgroup path but I guess that you might need to silence the
> soft lockup detector instead or come up with a different iteration
> scheme.

Thank you, Michal. I made a mistake. I originally added cond_resched() in
mem_cgroup_scan_tasks(), after the fn callback, but on reconsideration that
could cause unnecessary scheduling for the other callers of
mem_cgroup_scan_tasks(), so I moved it into dump_task(). However, I missed
the RCU read lock held on the global OOM path.

I think we can use touch_nmi_watchdog() in place of cond_resched(); it
silences the soft lockup detector without scheduling, so it should be safe
in that context. Do you think that is acceptable?

@@ -390,7 +391,7 @@ static int dump_task(struct task_struct *p, void *arg)
 	if (!is_memcg_oom(oc) && !oom_cpuset_eligible(p, oc))
 		return 0;
 
+	touch_nmi_watchdog();
 	task = find_lock_task_mm(p);

Best regards,
Ridong