On Tue 24-07-18 06:08:36, Tejun Heo wrote:
> Hello,
>
> On Tue, Jul 24, 2018 at 09:32:30AM +0200, Michal Hocko wrote:
[...]
> > > There's no reason to put any
> > > restrictions on what each cgroup can configure. The only thing which
> > > matters is that the effective behavior is what the highest in the
> > > ancestry configures, and, at the system level, it'd conceptually map
> > > to panic_on_oom.
> >
> > Hmm, so do we inherit group_oom? If not, how do we prevent
> > unexpected behavior?
>
> Hmm... I guess we're debating two options here. Please consider the
> following hierarchy.
>
>       R
>       |
>       A (group oom == 1)
>      / \
>     B   C
>     |
>     D
>
> 1. No matter what B, C or D sets, as long as A sets group oom, any oom
>    kill inside A's subtree kills the entire subtree.
>
> 2. A's group oom policy applies iff the source of the OOM is either at
>    or above A - ie. iff the OOM is system-wide or caused by memory.max
>    of A.
>
> In #1, it doesn't matter what B, C or D sets, so it's kinda moot to
> discuss whether they inherit A's setting or not. A's setting, if set,
> always overrides. In #2, what B, C or D sets matters if they also
> set their own memory.max, so there's no reason for them to inherit
> anything.
>
> I'm actually okay with either option. #2 is more flexible than #1, but
> given that this is a cgroup-owned property which is likely to be set
> on a per-application basis, #1 is likely good enough.
>
> IIRC, we did #2 in the original implementation and the simplified one
> is doing #1, right?

No, we've been discussing #2 unless I have misunderstood something.

I find it rather non-intuitive that a property outside of the oom
domain controls the behavior inside the domain. I will keep thinking
about that though.
--
Michal Hocko
SUSE Labs
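
[ Editor's note: for readers unfamiliar with the interface under discussion,
the hierarchy in the example could be configured roughly as below with the
cgroup v2 knob that eventually shipped as memory.oom.group (Linux 4.19+).
This is an illustrative sketch only; it assumes cgroup2 is mounted at
/sys/fs/cgroup, and it does not settle which of the two semantics (#1 or #2)
the merged code implements. ]

```shell
# Build the example hierarchy R -> A -> {B -> D, C} under cgroup v2.
# (Here R is the cgroup root itself.)
cd /sys/fs/cgroup
mkdir -p A/B/D A/C

# A sets "group oom == 1": an OOM kill attributed to A's OOM domain
# is meant to take down A's subtree as an indivisible workload.
echo 1 > A/memory.oom.group

# Giving A its own memory.max makes A a potential source of OOMs
# ("caused by memory.max of A" in option #2's wording).
echo 512M > A/memory.max

# B may also set its own limit; under option #2, an OOM caused by
# B's memory.max would be governed by B's own (unset) group-oom
# setting rather than A's, while under option #1 A's setting always
# overrides for the whole subtree.
echo 256M > A/B/memory.max
```
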