Re: [PATCH] mm, oom: make the calculation of oom badness more accurate

On Thu, Jul 9, 2020 at 2:26 PM Michal Hocko <mhocko@xxxxxxxxxx> wrote:
>
> On Thu 09-07-20 10:14:14, Yafang Shao wrote:
> > On Thu, Jul 9, 2020 at 3:02 AM Michal Hocko <mhocko@xxxxxxxxxx> wrote:
> > >
> > > On Wed 08-07-20 10:57:27, David Rientjes wrote:
> > > > On Wed, 8 Jul 2020, Michal Hocko wrote:
> > > >
> > > > > I have only now realized that David is not on Cc. Add him here. The
> > > > > patch is http://lkml.kernel.org/r/1594214649-9837-1-git-send-email-laoar.shao@xxxxxxxxx.
> > > > >
> > > > > I believe the main problem is that we are normalizing to oom_score_adj
> > > > > units rather than usage/total. I have a very vague recollection this has
> > > > > been done in the past but I didn't get to dig into details yet.
> > > > >
> > > >
> > > > The memcg max is 4194304 pages, and an oom_score_adj of -998 would yield a
> > > > page adjustment of:
> > > >
> > > > adj = -998 * 4194304 / 1000 = -4185915 pages
> > > >
> > > > The largest pid 58406 (data_sim) has rss 3967322 pages,
> > > > pgtables 37101568 / 4096 = 9058 pages, and swapents 0.  So its unadjusted
> > > > badness is
> > > >
> > > > 3967322 + 9058 pages = 3976380 pages
> > > >
> > > > Factoring in oom_score_adj, all of these processes will have a badness of
> > > > 1 because oom_badness() doesn't underflow, which I think is the point of
> > > > Yafang's proposal.
> > > >
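
(In pseudo-code, the clamping David describes boils down to roughly the
following; this is a simplified sketch of the pre-patch oom_badness()
logic with the numbers from this report plugged in, not the exact
kernel code:)

	long points, adj;

	adj = (long)p->signal->oom_score_adj;	/* -998 in this report */

	points = get_mm_rss(p->mm) + get_mm_counter(p->mm, MM_SWAPENTS) +
		 mm_pgtables_bytes(p->mm) / PAGE_SIZE;

	/* Normalize oom_score_adj to page units: roughly -4185915 pages here. */
	adj *= totalpages / 1000;
	points += adj;		/* 3976380 - 4185915 = -209535, i.e. negative */

	/* A negative result is clamped, so every eligible task with a large
	 * negative oom_score_adj ends up with the same badness of 1. */
	return points > 0 ? points : 1;
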
> > > > I think the patch can work but, as you mention, also needs an update to
> > > > proc_oom_score().  proc_oom_score() is using the global amount of memory
> > > > so Yafang is likely not seeing it go negative for that reason but it could
> > > > happen.
> > >
> > > Yes, memcg just makes it more obvious but the same might happen for the
> > > global case. I am not sure how we can both allow underflow and present
> > > the value that would fit the existing model. The exported value should
> > > really reflect what the oom killer is using for the calculation or we
> > > are going to see discrepancies between the real oom decision and
> > > presented values. So I believe we really have to change the calculation
> > > rather than just make it tolerant to underflows.
> > >
> >
> > Hi Michal,
> >
> > - Before my patch,
> > The result of oom_badness() is [1, 2 * totalpages),
> > and the result of proc_oom_score() is [0, 2000).
> >
> > However, the badness score documented in Documentation/filesystems/proc.rst is [0, 1000]:
> > "The badness heuristic assigns a value to each candidate task ranging from 0
> > (never kill) to 1000 (always kill) to determine which process is targeted"
> >
> > That means we need to update the documentation anyway, unless my
> > calculation is wrong.
>
> No, your calculation is correct. The documentation is correct albeit
slightly misleading. The net score calculation is indeed in the range [0, 1000].
> It is the oom_score_adj added on top which skews it. This is documented
> as
> "The value of /proc/<pid>/oom_score_adj is added to the badness score before it
> is used to determine which task to kill."
>
> This is the exported value but paragraph "3.2 /proc/<pid>/oom_score" only says
> "This file can be used to check the current score used by the oom-killer is for
> any given <pid>." which is not really explicit about the exported range.
>
> Maybe clarifying that would be helpful. I will post a patch. There are a
> few other things to sync up with the current state.
>
> > So the question is how to change it?
> >
> > - After my patch
> > oom_badness():  (-totalpages, 2 * totalpages)
> > proc_oom_score(): (-1000, 2000)
> >
> > If we allow underflow, we can change the documentation to say "from -1000
> > (never kill) to 2000 (always kill)".
> > Whereas if we don't allow underflow, we can make the simple change below:
> >
> > diff --git a/fs/proc/base.c b/fs/proc/base.c
> > index 774784587..0da8efa41 100644
> > --- a/fs/proc/base.c
> > +++ b/fs/proc/base.c
> > @@ -528,7 +528,7 @@ static int proc_oom_score(struct seq_file *m, struct pid_namespace *ns,
> >         unsigned long totalpages = totalram_pages + total_swap_pages;
> >         unsigned long points = 0;
> >
> > -       points = oom_badness(task, NULL, NULL, totalpages) *
> > +       points = 1000 + oom_badness(task, NULL, NULL, totalpages) *
> >                                       1000 / totalpages;
> >         seq_printf(m, "%lu\n", points);
> >
> > And then update the documentation to say "from 0 (never kill) to 3000
> > (always kill)".
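
(Spelling out the arithmetic behind that range: with oom_badness() now
in (-totalpages, 2 * totalpages),

    points = 1000 + oom_badness() * 1000 / totalpages
           = 1000 + (-1000, 2000)
           = (0, 3000)

which is where the 0 to 3000 range comes from.)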
>
> This is still not quite there yet, I am afraid. OOM_SCORE_ADJ_MIN tasks have
> always reported 0 and I can imagine somebody might depend on this fact.

No, I don't think anybody will use the reported 0 to conclude that a
task is an OOM_SCORE_ADJ_MIN task, because
    points = oom_badness(task, totalpages) * 1000 / totalpages;
so the points will always be 0 whenever the return value of
oom_badness(task, totalpages) is less than totalpages / 1000.

If users want to know whether a task is an OOM_SCORE_ADJ_MIN task,
they will read /proc/[pid]/oom_score_adj instead, which is more
reliable.
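
For example, with totalpages = 4194304 as in this report, any task
whose badness is below 4194304 / 1000 = 4194 pages (roughly 16MB with
4KB pages) already reports an oom_score of 0 today, whether or not it
is an OOM_SCORE_ADJ_MIN task.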

> So you need to special-case LONG_MIN at least. It would also be better
> to stick with the [0, 2000] range.

I don't see why it must stick with the [0, 2000] range.
Since oom_score_adj sticks with the [-1000, 1000] range, I think
proc_oom_score() could be negative as well.
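
That said, if we do need to special-case LONG_MIN and keep the
[0, 2000] range, a rough sketch of proc_oom_score() could look like
this (assuming oom_badness() is changed to return a long, with
LONG_MIN for ineligible tasks; untested):

	static int proc_oom_score(struct seq_file *m, struct pid_namespace *ns,
				  struct pid *pid, struct task_struct *task)
	{
		unsigned long totalpages = totalram_pages + total_swap_pages;
		unsigned long points = 0;
		long badness = oom_badness(task, totalpages);

		/*
		 * Keep reporting 0 for OOM_SCORE_ADJ_MIN tasks, and scale
		 * every other badness from (-totalpages, 2 * totalpages)
		 * back into the historical [0, 2000] range.
		 */
		if (badness != LONG_MIN)
			points = (1000 + badness * 1000 / (long)totalpages) * 2 / 3;

		seq_printf(m, "%lu\n", points);

		return 0;
	}

Scaling by 2/3 maps the (0, 3000) interval back onto (0, 2000), so
existing consumers keep seeing the range they are used to.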

-- 
Thanks
Yafang
