On Mon 24-08-20 08:30:36, Suren Baghdasaryan wrote:
> Currently __set_oom_adj loops through all processes in the system to
> keep oom_score_adj and oom_score_adj_min in sync between processes
> sharing their mm. This is done for any task with more than one mm_user,
> which includes processes with multiple threads (sharing mm and signals).
> However, for such processes the loop is unnecessary because their signal
> structure is shared as well.
>
> Android updates oom_score_adj whenever a task changes its role
> (background/foreground/...) or binds to/unbinds from a service, making
> it more/less important. Such operations can happen frequently.
> We noticed that updates to oom_score_adj became more expensive and,
> after further investigation, found out that the patch mentioned in
> "Fixes" introduced a regression. Using a Pixel 4 with a typical Android
> workload, write time to oom_score_adj increased from ~3.57us to ~362us.
> Moreover, this regression depends linearly on the number of
> multi-threaded processes running on the system.
>
> Mark the mm with a new MMF_PROC_SHARED flag bit when a task is created
> with (CLONE_VM && !CLONE_THREAD && !CLONE_VFORK). Change __set_oom_adj
> to use MMF_PROC_SHARED instead of mm_users to decide whether the
> oom_score_adj update should be synchronized between multiple processes.
> To prevent races between clone() and __set_oom_adj() when the
> oom_score_adj of the process being cloned might be modified from
> userspace, we use oom_adj_mutex. Its scope is changed to global and it
> is renamed to oom_adj_lock for naming consistency with oom_lock. The
> combination of (CLONE_VM && !CLONE_THREAD) is rarely used except for
> the case of vfork(). To prevent performance regressions of vfork(), we
> skip taking oom_adj_lock and setting MMF_PROC_SHARED when CLONE_VFORK
> is specified.
> Clearing the MMF_PROC_SHARED flag (when the last process sharing the mm
> exits) is left out of this patch to keep it simple and because it is
> believed that this threading model is rare. Should there ever be a need
> for optimizing that case as well, it can be done by hooking into the
> exit path, likely following the mm_update_next_owner pattern.
>
> With the combination of (CLONE_VM && !CLONE_THREAD && !CLONE_VFORK)
> being quite rare, the regression is gone after the change is applied.
>
> Fixes: 44a70adec910 ("mm, oom_adj: make sure processes sharing mm have same view of oom_score_adj")
> Reported-by: Tim Murray <timmurray@xxxxxxxxxx>
> Debugged-by: Minchan Kim <minchan@xxxxxxxxxx>
> Suggested-by: Michal Hocko <mhocko@xxxxxxxxxx>
> Signed-off-by: Suren Baghdasaryan <surenb@xxxxxxxxxx>

Acked-by: Michal Hocko <mhocko@xxxxxxxx>

I hope we can build on top of this and move the oom_score_adj* state to
the mm_struct to remove all this cruft, but I still think that this is
conceptually easier to backport to older kernels than a completely new
approach.

Btw. now that the flag is in place we can optimize __oom_kill_process as
well. Not that this particular path is performance sensitive, but it
could show up in group oom killing in memcgs. It should be as simple as
the following (I can prepare an official patch unless somebody beats me
to it):

diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index c22f07c986cb..04cf958d0c29 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -906,29 +906,31 @@ static void __oom_kill_process(struct task_struct *victim, const char *message)
 	 * That thread will now get access to memory reserves since it has a
 	 * pending fatal signal.
 	 */
-	rcu_read_lock();
-	for_each_process(p) {
-		if (!process_shares_mm(p, mm))
-			continue;
-		if (same_thread_group(p, victim))
-			continue;
-		if (is_global_init(p)) {
-			can_oom_reap = false;
-			set_bit(MMF_OOM_SKIP, &mm->flags);
-			pr_info("oom killer %d (%s) has mm pinned by %d (%s)\n",
-					task_pid_nr(victim), victim->comm,
-					task_pid_nr(p), p->comm);
-			continue;
+	if (test_bit(MMF_PROC_SHARED, &mm->flags)) {
+		rcu_read_lock();
+		for_each_process(p) {
+			if (!process_shares_mm(p, mm))
+				continue;
+			if (same_thread_group(p, victim))
+				continue;
+			if (is_global_init(p)) {
+				can_oom_reap = false;
+				set_bit(MMF_OOM_SKIP, &mm->flags);
+				pr_info("oom killer %d (%s) has mm pinned by %d (%s)\n",
+						task_pid_nr(victim), victim->comm,
+						task_pid_nr(p), p->comm);
+				continue;
+			}
+			/*
+			 * No kthread_use_mm() user needs to read from the userspace so
+			 * we are ok to reap it.
+			 */
+			if (unlikely(p->flags & PF_KTHREAD))
+				continue;
+			do_send_sig_info(SIGKILL, SEND_SIG_PRIV, p, PIDTYPE_TGID);
 		}
-		/*
-		 * No kthead_use_mm() user needs to read from the userspace so
-		 * we are ok to reap it.
-		 */
-		if (unlikely(p->flags & PF_KTHREAD))
-			continue;
-		do_send_sig_info(SIGKILL, SEND_SIG_PRIV, p, PIDTYPE_TGID);
+		rcu_read_unlock();
 	}
-	rcu_read_unlock();
 
 	if (can_oom_reap)
 		wake_oom_reaper(victim);
--
Michal Hocko
SUSE Labs