On Wed 18-12-24 17:00:38, Chen Ridong wrote:
> 
> 
> On 2024/12/18 15:56, Michal Hocko wrote:
> > On Wed 18-12-24 15:44:34, Chen Ridong wrote:
> >>
> >>
> >> On 2024/12/17 20:54, Michal Hocko wrote:
> >>> On Tue 17-12-24 12:18:28, Chen Ridong wrote:
> >>> [...]
> >>>> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> >>>> index 1c485beb0b93..14260381cccc 100644
> >>>> --- a/mm/oom_kill.c
> >>>> +++ b/mm/oom_kill.c
> >>>> @@ -390,6 +390,7 @@ static int dump_task(struct task_struct *p, void *arg)
> >>>>  	if (!is_memcg_oom(oc) && !oom_cpuset_eligible(p, oc))
> >>>>  		return 0;
> >>>>  
> >>>> +	cond_resched();
> >>>>  	task = find_lock_task_mm(p);
> >>>>  	if (!task) {
> >>>>  		/*
> >>> 
> >>> This is called from RCU read lock for the global OOM killer path and I
> >>> do not think you can schedule there. I do not remember the specifics of
> >>> task traversal for the cgroup path but I guess that you might need to
> >>> silence the soft lockup detector instead or come up with a different
> >>> iteration scheme.
> >> 
> >> Thank you, Michal.
> >> 
> >> I made a mistake. I added cond_resched in the mem_cgroup_scan_tasks
> >> function, below the fn() call, but on reconsideration it may cause
> >> unnecessary scheduling for other callers of mem_cgroup_scan_tasks.
> >> Therefore, I moved it into the dump_task function. However, I missed
> >> the RCU lock from the global OOM.
> >> 
> >> I think we can use touch_nmi_watchdog in place of cond_resched, which
> >> can silence the soft lockup detector. Do you think that is acceptable?
> > 
> > It is certainly a way to go. Not the best one at that though. Maybe we
> > need a different solution for the global and for the memcg OOMs. During
> > the global OOM we rarely care about latency as the whole system is
> > likely to struggle. Memcg OOMs are much more likely. Having that many
> > tasks in a memcg certainly requires further partitioning, so if
> > configured properly the OOM latency shouldn't be visible much. But I am
> > wondering whether the cgroup task iteration could use cond_resched while
> > the global one would touch_nmi_watchdog every N iterations. I might
> > be missing something but I do not see any locking required outside of
> > css_task_iter_*.
> 
> Do you mean something like this:

I've had something like this (untested) in mind:

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 7b3503d12aaf..37abc94abd2e 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1167,10 +1167,14 @@ void mem_cgroup_scan_tasks(struct mem_cgroup *memcg,
 	for_each_mem_cgroup_tree(iter, memcg) {
 		struct css_task_iter it;
 		struct task_struct *task;
+		unsigned int i = 0;
 
 		css_task_iter_start(&iter->css, CSS_TASK_ITER_PROCS, &it);
-		while (!ret && (task = css_task_iter_next(&it)))
+		while (!ret && (task = css_task_iter_next(&it))) {
 			ret = fn(task, arg);
+			if (!(++i % 1000))
+				cond_resched();
+		}
 		css_task_iter_end(&it);
 		if (ret) {
 			mem_cgroup_iter_break(memcg, iter);
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 1c485beb0b93..3bf2304ed20c 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -430,10 +430,14 @@ static void dump_tasks(struct oom_control *oc)
 		mem_cgroup_scan_tasks(oc->memcg, dump_task, oc);
 	else {
 		struct task_struct *p;
+		unsigned int i = 0;
 
 		rcu_read_lock();
-		for_each_process(p)
+		for_each_process(p) {
+			if (!(++i % 1000))
+				touch_softlockup_watchdog();
 			dump_task(p, oc);
+		}
 		rcu_read_unlock();
 	}
 }

-- 
Michal Hocko
SUSE Labs
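
For clarity, the "every N iterations" pattern that both hunks above rely on
simply checks whether the counter has reached a multiple of N; the negation
matters, since a bare (++i % 1000) would fire on 999 out of every 1000
passes rather than once. A minimal stand-alone sketch of that pattern
follows; the helper names, the N of 1000, and the loop bound are
illustrative only and are not part of the patch:

#include <stdio.h>

/* Stand-in for the per-task work (fn() / dump_task() in the patch). */
static void do_work(unsigned int i)
{
	(void)i;
}

/* Stand-in for the periodic action (cond_resched() /
 * touch_softlockup_watchdog()). */
static void periodic_action(void)
{
	printf("periodic action\n");
}

int main(void)
{
	unsigned int i = 0;

	while (i < 5000) {	/* illustrative loop bound */
		do_work(i);
		/* Fires only when the counter hits a multiple of 1000. */
		if (!(++i % 1000))
			periodic_action();
	}
	return 0;
}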