Re: [RFC PATCH] mm, oom: cgroup-aware OOM-killer

On Fri, May 19, 2017 at 3:30 AM, Michal Hocko <mhocko@xxxxxxxxxx> wrote:
> On Thu 18-05-17 17:28:04, Roman Gushchin wrote:
>> Traditionally, the OOM killer operates at the process level.
>> Under OOM conditions, it finds the process with the highest oom score
>> and kills it.
>>
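For reference, per-task badness is visible from userspace via
/proc/<pid>/oom_score. The snippet below is only a rough illustration of
that per-process view (it just prints the task with the highest exported
score); the actual in-kernel selection in oom_badness()/select_bad_process()
is more involved.

#include <ctype.h>
#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>

/* Scan /proc and report the task with the highest oom_score. */
int main(void)
{
	DIR *proc = opendir("/proc");
	struct dirent *de;
	long best_score = -1, best_pid = -1;

	if (!proc)
		return 1;

	while ((de = readdir(proc)) != NULL) {
		char path[64];
		FILE *f;
		long score;

		if (!isdigit((unsigned char)de->d_name[0]))
			continue;		/* not a pid directory */

		snprintf(path, sizeof(path), "/proc/%s/oom_score", de->d_name);
		f = fopen(path, "r");
		if (!f)
			continue;		/* task may already be gone */

		if (fscanf(f, "%ld", &score) == 1 && score > best_score) {
			best_score = score;
			best_pid = atol(de->d_name);
		}
		fclose(f);
	}
	closedir(proc);

	printf("highest oom_score: pid %ld (score %ld)\n", best_pid, best_score);
	return 0;
}
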
>> This behavior doesn't suit systems with many running containers well.
>> There are two main issues:
>>
>> 1) There is no fairness between containers. A small container with
>> a few large processes will be chosen over a large one with a huge
>> number of small processes.
>>
>> 2) Containers often do not expect that some random process inside
>> will be killed, so in general it is much safer to kill the whole
>> cgroup. Traditionally, this was implemented in userspace, but doing
>> it in the kernel has some advantages, especially in the case of a
>> system-wide OOM.
>>
>> To address these issues, a cgroup-aware OOM killer is introduced.
>> Under OOM conditions, it looks for the memcg with the highest oom
>> score and kills all processes inside it.
>>
>> The memcg oom score is calculated as the combined size of the active
>> and inactive anon LRU lists, the unevictable LRU list, and swap usage.
>>
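A minimal sketch of how a per-memcg score along those lines might be
computed is below; the helper and field names used here
(mem_cgroup_node_nr_lru_pages(), a page_counter_read() of the memcg's
swap counter) are assumptions drawn from existing memcg code, not
necessarily what the patch itself uses.

#include <linux/memcontrol.h>
#include <linux/mmzone.h>
#include <linux/nodemask.h>
#include <linux/page_counter.h>

/*
 * Sketch only: sum anon + unevictable LRU pages and swap charged to one
 * memcg, as described above.  Helper and field names are assumptions.
 */
static long memcg_oom_score(struct mem_cgroup *memcg,
			    const nodemask_t *nodemask)
{
	long points = 0;
	int nid;

	for_each_node_state(nid, N_MEMORY) {
		if (nodemask && !node_isset(nid, *nodemask))
			continue;

		points += mem_cgroup_node_nr_lru_pages(memcg, nid,
				LRU_ALL_ANON | BIT(LRU_UNEVICTABLE));
	}

	/* Anon memory that went to swap still counts against the cgroup. */
	points += page_counter_read(&memcg->swap);

	return points;
}
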
>> For a cgroup-wide OOM, only cgroups belonging to the subtree of
>> the OOMing cgroup are considered.
>
> While this might make sense for some workloads/setups, it is not a
> generally acceptable policy IMHO. We discussed that different OOM
> policies might be interesting a few years back at LSFMM, but there was
> no real consensus on how to do that. One possibility was to allow
> bpf-like mechanisms. Could you explore that path?

I agree; I think it needs more thought. I wonder if the real issue is
something else. For example:

1. Did we overcommit a particular container too much?
2. Do we need something like https://lwn.net/Articles/604212/ to solve
the problem?
3. We have OOM notifiers now; could those be used (assuming you are
interested in non-memcg-related OOMs affecting a container)? A sketch of
that interface follows this list.
4. How do we determine limits for these containers, from a fairness
perspective?
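On point 3, the existing hook is register_oom_notifier() in
mm/oom_kill.c: the notifier chain runs before a victim is selected, and
if the callbacks report freed pages the kill is skipped for that
attempt. A minimal module sketch (the callback body here is just a
placeholder):

#include <linux/module.h>
#include <linux/notifier.h>
#include <linux/oom.h>

/*
 * Called from the OOM path; report any pages we managed to release
 * through *parm so the kernel can back off instead of killing.
 */
static int example_oom_notify(struct notifier_block *nb,
			      unsigned long unused, void *parm)
{
	unsigned long *freed = parm;

	/* A real user would drop caches it owns here; this frees nothing. */
	*freed += 0;
	return NOTIFY_OK;
}

static struct notifier_block example_oom_nb = {
	.notifier_call = example_oom_notify,
};

static int __init example_oom_init(void)
{
	return register_oom_notifier(&example_oom_nb);
}

static void __exit example_oom_exit(void)
{
	unregister_oom_notifier(&example_oom_nb);
}

module_init(example_oom_init);
module_exit(example_oom_exit);
MODULE_LICENSE("GPL");
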

I'm just trying to understand what leads to the issues you are seeing.

Balbir


