On Tue 14-01-25 14:23:22, Johannes Weiner wrote:
> On Tue, Jan 14, 2025 at 07:13:07PM +0100, Michal Hocko wrote:
> > Anyway, have you tried to reproduce with
> >
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index 7b3503d12aaf..9c30c442e3b0 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -1627,7 +1627,7 @@ static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
> >  	 * A few threads which were not waiting at mutex_lock_killable() can
> >  	 * fail to bail out. Therefore, check again after holding oom_lock.
> >  	 */
> > -	ret = task_is_dying() || out_of_memory(&oc);
> > +	ret = out_of_memory(&oc);
> >  
> >  unlock:
> >  	mutex_unlock(&oom_lock);
> >
> > proposed by Johannes earlier? This should help to trigger the oom reaper
> > to free up some memory.
> 
> Yes, I was wondering about that too.
> 
> If the OOM reaper can be our reliable way of forward progress, we
> don't need any reserve or headroom beyond memory.max.
> 
> IIRC it can fail if somebody is holding mmap_sem for writing. The exit
> path at some point takes that, but also around the time it frees up
> all its memory voluntarily, so that should be fine. Are you aware of
> other scenarios where it can fail?

Setting MMF_OOM_SKIP is the final moment when the oom reaper can act.
This happens after exit_mm_release, which releases futexes. Also,
get_user callers shouldn't be holding the mmap_lock exclusively, as
that would deadlock when the page fault path takes the read lock,
right?

> What if everything has been swapped out already and there is nothing
> to reap? IOW, only unreclaimable/kernel memory remaining in the group.

Yes, this is possible. It is also possible that the oom victim depletes
the oom reserves globally and fails the allocation, resulting in the
same problem. Reserves do buy some time but do not solve the underlying
issue.

> It still seems to me that allowing the OOM victim (and only the OOM
> victim) to bypass memory.max is the only guarantee to progress.
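To make the ordering concrete, here is a rough, simplified sketch of
the relevant part of the exit path as I understand it (details differ
between kernel versions, so take it as an illustration rather than an
exact call chain):

	do_exit()
	  exit_mm()
	    exit_mm_release()	/* futex_exit_release(): robust futexes
				 * are handled here, before reaping stops */
	    mmput()
	      __mmput()
		exit_mmap()	/* unmaps and frees the user address space */
		  ...
		  set_bit(MMF_OOM_SKIP, &mm->flags)
				/* oom reaper backs off from this point on */

So the reaper has a window up to the point where the task has released
its futexes and is about to tear down its address space anyway.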
> 
> I'm not really concerned about side effects. Any runaway allocation in
> the exit path (like the vmalloc one you referenced before) is a much
> bigger concern for exceeding the physical OOM reserves in the page
> allocator. What's a containment failure for cgroups would be a memory
> deadlock at the system level. It's a class of kernel bug that needs
> fixing, not something we can really work around in the cgroup code.

I do agree that a memory deadlock is not really a proper way to deal
with the issue. I have to admit that my understanding was based on
ENOMEM being properly propagated out of in-kernel user page faults. It
seems I was wrong about that. On the other hand, wouldn't that be a
proper way to deal with the issue? Relying on allocations never
failing is quite fragile.

-- 
Michal Hocko
SUSE Labs