Re: [PATCH 3/3] memcg oom: bail out from the charge path if no victim found

On Mon, Apr 20, 2020 at 5:14 PM Michal Hocko <mhocko@xxxxxxxxxx> wrote:
>
> On Mon 20-04-20 16:52:05, Yafang Shao wrote:
> > On Mon, Apr 20, 2020 at 4:13 PM Michal Hocko <mhocko@xxxxxxxxxx> wrote:
> > >
> > > On Sat 18-04-20 11:13:11, Yafang Shao wrote:
> [...]
> > > > This patch is to improve it.
> > > > If no victim is found in memcg oom, we should force the current task to
> > > > wait until there are available pages. That is similar to the behavior in
> > > > memcg1 when oom_kill_disable is set.
> > >
> > > The primary reason why we force the charge is because we _cannot_ wait
> > > indefinitely in the charge path because the current call chain might
> > > hold locks or other resources which could block a large part of the
> > > system. You are essentially reintroducing that behavior.
> > >
> >
> > Perhaps my poor English misled you?
> > The task is NOT waiting in the charge path; it is really waiting
> > at the end of the page fault, so it doesn't hold any locks.
>
> How is that supposed to work? Sorry, I didn't really study your patch
> very closely because it doesn't apply to the current Linus tree, and
> your previous 2 patches have reshuffled the code, so it is not really
> trivial to get a good picture of the overall logic change.
>

My patch is based on commit 8632e9b5645b; I can rebase it for easier
review.
Here is the overall logic of the patch:
do_page_fault
    mem_cgroup_try_charge
        mem_cgroup_out_of_memory  <<<< over the limit of this memcg
            out_of_memory
                if (!oc->chosen)  <<<< no killable task found
                    set_an_oom_state_in_the_task  <<<< set the oom state
    mm_fault_error
        pagefault_out_of_memory  <<<< entered because the charge path
                                      returned VM_FAULT_OOM
            mem_cgroup_oom_synchronize(true)
                check_the_oom_state_and_then_wait_here  <<<< check the oom state
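
To make that concrete, here is a rough sketch of the two halves in
kernel-style C. The names (current->memcg_in_oom, memcg_oom_waitq,
oom_wait_info, memcg_oom_wake_function) are borrowed from the existing
memcg OOM machinery; this illustrates the idea rather than quoting the
patch verbatim:

/* Charge path, in out_of_memory(): no eligible victim was found. */
if (is_memcg_oom(oc) && !oc->chosen) {
        /*
         * Record the OOM state on the task instead of force-charging.
         * Nothing sleeps here; the task only waits after it has
         * unwound to the end of the page fault, with no locks held.
         */
        current->memcg_in_oom = oc->memcg;
        return false;
}

/* End of the page fault, in mem_cgroup_oom_synchronize(true): */
struct mem_cgroup *memcg = current->memcg_in_oom;

if (memcg) {
        struct oom_wait_info owait = { .memcg = memcg };

        /* Sleep until an uncharge wakes this memcg's OOM waiters. */
        init_waitqueue_func_entry(&owait.wait, memcg_oom_wake_function);
        owait.wait.private = current;
        prepare_to_wait(&memcg_oom_waitq, &owait.wait, TASK_KILLABLE);
        schedule();
        finish_wait(&memcg_oom_waitq, &owait.wait);
        current->memcg_in_oom = NULL;
}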

> > See the comment above mem_cgroup_oom_synchronize()
>
> Anyway mem_cgroup_oom_synchronize shouldn't really trigger unless the
> oom handling is disabled (aka handed over to the userspace). All other
> paths should handle the oom in the charge path.

Right. This patch introduces another path into
mem_cgroup_oom_synchronize().

>  Please have a look at
> 29ef680ae7c2 ("memcg, oom: move out_of_memory back to the charge path")
> for more background and motivation.
>

Before sending this patch, I read that commit carefully.

> mem_cgroup_oom_synchronize was a workaround for deadlocks and the side
> effect was that all other charge paths outside of #PF were failing
> allocations prematurely and that had an effect to user space.
>

I guess this side effect is caused by the imprecision of the page
counter; for example, the counter isn't updated immediately after the
pages are uncharged. That's the issue we should improve, IMHO.
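
For instance, the charge path in mm/memcontrol.c serves charges from
per-CPU stocks, so the shared page counter can briefly lag the real
usage. A simplified sketch of the upstream consume_stock() helper
(details elided):

static bool consume_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
{
        struct memcg_stock_pcp *stock;
        unsigned long flags;
        bool ret = false;

        local_irq_save(flags);

        stock = this_cpu_ptr(&memcg_stock);
        if (memcg == stock->cached && stock->nr_pages >= nr_pages) {
                /* Served from the local stock: page_counter untouched. */
                stock->nr_pages -= nr_pages;
                ret = true;
        }

        local_irq_restore(flags);

        return ret;
}

Uncharges are batched similarly (see uncharge_batch()), which is why
usage readings can be stale for a short window.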

> > > Is the above example a real usecase or you have just tried a test case
> > > that would trigger the problem?
> >
> > On my server I found the memory usage of a container was greater than
> > the limit of it.
> > From the dmesg I know there are no killable tasks because the
> > oom_score_adj is set to -1000.
>
> I would really recommend addressing this problem in the userspace
> configuration, either by increasing the memory limit or by fixing the
> oom-disabled userspace to not consume that much memory.
>

This issue can be addressed in the userspace configuration.
But note that there are many containers running on a single host, and
what we should do is keep the isolation as strong as possible.
If we don't take any action in the kernel, users will complain to us
that their services are easily affected by the weak isolation of the
container.

> > Then I tried this test case to reproduce the issue.
> > The issue can be triggered by a misconfiguration of oom_score_adj,
> > and can also be triggered by a memory leak in a task with
> > oom_score_adj -1000.
>
> Please note that there is not much the system can do about oom disabled
> tasks that leak memory. Even the global case would slowly kill all other
> userspace until it panics due to no eligible tasks. The oom_score_adj
> has very strong consequences. Do not use it without very careful
> consideration.

global case -> kill others until the system panics.
container case -> kill others until no task can run in the container.

I think this is consistent behavior.
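
For reference, here is a minimal userspace reproducer along those
lines. It is only a sketch: it assumes cgroup v2, root privileges (to
lower oom_score_adj to -1000), and that the task has already been put
into a memcg with a small limit, e.g. "echo 64M > memory.max":

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
        int fd = open("/proc/self/oom_score_adj", O_WRONLY);

        /* Make this task ineligible for the OOM killer. */
        if (fd < 0 || write(fd, "-1000", 5) != 5) {
                perror("oom_score_adj");
                return 1;
        }
        close(fd);

        /* Leak memory past the memcg limit, 1 MiB at a time. */
        for (;;) {
                char *p = malloc(1 << 20);

                if (!p)
                        break;
                memset(p, 1, 1 << 20);  /* fault the pages in */
        }
        return 0;
}

With no other tasks in the memcg, the memcg OOM killer finds no
eligible victim, which is the situation this patch targets.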

Thanks
Yafang



