On Fri, Sep 12, 2014 at 10:08:53AM +0200, Michal Hocko wrote:
> On Thu 11-09-14 17:33:39, Niv Yehezkel wrote:
> > There is no need to fallback and continue computing
> > badness for each running process after we have found a
> > process currently performing the swapoff syscall. We ought to
> > immediately select this process for killing.
>
> a) this is not only about swapoff. KSM (run_store) is currently
>    considered oom origin as well.
> b) you forgot to tell us what led you to this change. It sounds like a
>    minor optimization to me. We can potentially skip scanning through
>    many tasks but this is not guaranteed at all because our task might
>    be at the very end of the tasks list as well.
> c) finally this might select thread != thread_group_leader which is a
>    minor issue affecting oom report
>
> I am not saying the change is wrong but please make sure you first
> describe your motivation. Does it fix any issue you are seeing? Is this
> just something that struck you while reading the code? Maybe it was
> /* always select this thread first */ comment for OOM_SCAN_SELECT.
> Besides that your process_selected is not really needed. You could test
> for chosen_points == ULONG_MAX as well. This would be even more
> straightforward because any score like that is ultimate candidate.
>
> > Signed-off-by: Niv Yehezkel <executerx@xxxxxxxxx>
> > ---
> >  mm/oom_kill.c | 6 +++++-
> >  1 file changed, 5 insertions(+), 1 deletion(-)
> >
> > diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> > index 1e11df8..68ac30e 100644
> > --- a/mm/oom_kill.c
> > +++ b/mm/oom_kill.c
> > @@ -305,6 +305,7 @@ static struct task_struct *select_bad_process(unsigned int *ppoints,
> >  	struct task_struct *g, *p;
> >  	struct task_struct *chosen = NULL;
> >  	unsigned long chosen_points = 0;
> > +	bool process_selected = false;
> >
> >  	rcu_read_lock();
> >  	for_each_process_thread(g, p) {
> > @@ -315,7 +316,8 @@ static struct task_struct *select_bad_process(unsigned int *ppoints,
> >  		case OOM_SCAN_SELECT:
> >  			chosen = p;
> >  			chosen_points = ULONG_MAX;
> > -			/* fall through */
> > +			process_selected = true;
> > +			break;
> >  		case OOM_SCAN_CONTINUE:
> >  			continue;
> >  		case OOM_SCAN_ABORT:
> > @@ -324,6 +326,8 @@ static struct task_struct *select_bad_process(unsigned int *ppoints,
> >  		case OOM_SCAN_OK:
> >  			break;
> >  		};
> > +		if (process_selected)
> > +			break;
> >  		points = oom_badness(p, NULL, nodemask, totalpages);
> >  		if (!points || points < chosen_points)
> >  			continue;
> > --
> > 1.7.10.4
> >
>
> --
> Michal Hocko
> SUSE Labs

I have been reviewing kernel code lately, looking for implementations that do not fulfil their stated intention; that is what most of the patches I send are about. The motivation comes largely from the Eudyptula challenge, so there is no concrete issue driving this patch beyond the misleading fall-through. To the point: I have not witnessed any major effect on performance because of it. I fixed the patch following your suggestion (testing chosen_points == ULONG_MAX instead of the extra boolean) and attached it to this mail.
>From 1e92f232e9367565d93629b54117b27b9bbfebda Mon Sep 17 00:00:00 2001
From: Niv Yehezkel <executerx@xxxxxxxxx>
Date: Fri, 12 Sep 2014 04:21:48 -0400
Subject: [PATCH] break after selecting process to kill

Signed-off-by: Niv Yehezkel <executerx@xxxxxxxxx>
---
 mm/oom_kill.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 1e11df8..3203578 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -315,7 +315,7 @@ static struct task_struct *select_bad_process(unsigned int *ppoints,
 		case OOM_SCAN_SELECT:
 			chosen = p;
 			chosen_points = ULONG_MAX;
-			/* fall through */
+			break;
 		case OOM_SCAN_CONTINUE:
 			continue;
 		case OOM_SCAN_ABORT:
@@ -324,6 +324,8 @@ static struct task_struct *select_bad_process(unsigned int *ppoints,
 		case OOM_SCAN_OK:
 			break;
 		};
+		if (chosen_points == ULONG_MAX)
+			break;
 		points = oom_badness(p, NULL, nodemask, totalpages);
 		if (!points || points < chosen_points)
 			continue;
--
1.7.10.4
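
For anyone following the thread without mm/oom_kill.c at hand, here is a minimal, userspace-only sketch of the control flow the patched loop ends up with. The names and types (toy_task, pick_victim, the SCAN_* values) are invented for illustration and are not kernel APIs; the point is only that the ULONG_MAX sentinel Michal suggested makes the extra boolean unnecessary and stops the scan as soon as a task gets the "select" verdict.

#include <limits.h>
#include <stdio.h>

/* Toy stand-ins for the kernel's oom_scan_t verdicts and task list. */
enum scan_result { SCAN_SELECT, SCAN_CONTINUE, SCAN_ABORT, SCAN_OK };

struct toy_task {
	const char *name;
	enum scan_result verdict; /* what the scan helper would decide */
	unsigned long badness;    /* what the badness score would be */
};

/*
 * Same shape as the patched select_bad_process() loop: a SCAN_SELECT
 * verdict pins the score to ULONG_MAX, and the sentinel check right
 * after the switch ends the scan, so no extra boolean is needed.
 */
static struct toy_task *pick_victim(struct toy_task *tasks, int n)
{
	struct toy_task *chosen = NULL;
	unsigned long chosen_points = 0;

	for (int i = 0; i < n; i++) {
		struct toy_task *p = &tasks[i];

		switch (p->verdict) {
		case SCAN_SELECT:
			chosen = p;
			chosen_points = ULONG_MAX;
			break;            /* leaves the switch only */
		case SCAN_CONTINUE:
			continue;
		case SCAN_ABORT:
			return NULL;      /* the kernel aborts the whole scan here */
		case SCAN_OK:
			break;
		}
		if (chosen_points == ULONG_MAX)
			break;            /* sentinel: stop scanning entirely */

		if (!p->badness || p->badness < chosen_points)
			continue;
		chosen = p;
		chosen_points = p->badness;
	}
	return chosen;
}

int main(void)
{
	struct toy_task tasks[] = {
		{ "idle",    SCAN_OK,     10 },
		{ "swapoff", SCAN_SELECT,  0 }, /* oom-origin task, picked outright */
		{ "hog",     SCAN_OK,    999 }, /* never even scored */
	};
	struct toy_task *victim = pick_victim(tasks, 3);

	printf("victim: %s\n", victim ? victim->name : "(none)");
	return 0;
}

Running it prints "victim: swapoff": the third task is never scored because the loop exits immediately after the preselected task is found, which is exactly what the break plus the sentinel test achieve in the real loop.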