On Mon, Mar 1, 2021 at 4:24 AM Michal Hocko <mhocko@xxxxxxxx> wrote:
>
> On Fri 26-02-21 11:19:51, Yang Shi wrote:
> > On Fri, Feb 26, 2021 at 8:42 AM Yang Shi <shy828301@xxxxxxxxx> wrote:
> > >
> > > On Thu, Feb 25, 2021 at 11:30 PM Michal Hocko <mhocko@xxxxxxxx> wrote:
> > > >
> > > > On Thu 25-02-21 18:12:54, Yang Shi wrote:
> > > > > When debugging an oom issue, I found the oom_kill counter of memcg
> > > > > is confusing. At first glance, without checking the documentation,
> > > > > I thought it just counted memcg ooms, but it turns out it counts
> > > > > both global and memcg ooms.
> > > >
> > > > Yes, this is the case indeed. The point of the counter was to count oom
> > > > victims from the memcg rather than matching that to the source of the
> > > > oom. Remember that this could have been a memcg oom up in the
> > > > hierarchy as well. Counting victims on the oom origin could be equally
> > >
> > > Yes, it is updated hierarchically on v2, but not on v1. I suppose
> > > this is because v1 may work in non-hierarchical mode? If this is the
> > > only reason, we may be able to remove this to get aligned with v2,
> > > since non-hierarchical mode is no longer supported.
> >
> > BTW, having the counter recorded hierarchically may help out one of
> > our usecases. We want to monitor the oom_kill counter for some
> > services, but systemd would wipe out the cgroup if the service is oom
> > killed and then restart the service from scratch (i.e. create a brand
> > new cgroup with the same name). So this systemd behavior makes the
> > counter useless if it is not recorded hierarchically.
>
> Just to make sure I understand correctly. You have a setup where the
> memcg for a service has a hard limit configured, and it is destroyed
> when oom happens inside that memcg. A new instance is created at the
> same place in the hierarchy with a new memcg. Your problem is that the
> oom-killed memcg will not be recorded in its parent's oom event and the
> information will get lost with the torn-down memcg. Correct?

Yes. But it is a global oom instead of a memcg oom.

>
> If yes, then how do you tell which of the child cgroups was killed from
> the parent counter? Or is there only a single child?

There is not only a single child, but in our case the oom-killed child
consumes 90% of the memory, so a global oom would kill it. This doesn't
prevent ooms in other children from being accounted as well, but we
don't need a very accurate counter, and in our case we can tell that 99%
of the oom kills happen in that specific memcg.

>
> Anyway, cgroup v2 will offer the hierarchical behavior. Do you have any
> strong reasons that you cannot use v2?

I do prefer to migrate to cgroup v2 personally, but it incurs
significant work for orchestration tools, infrastructure configuration,
monitoring tools, etc., which are out of my control.

> --
> Michal Hocko
> SUSE Labs
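
For anyone following the thread, here is a minimal user-space sketch of
the monitoring approach discussed above (not code from the thread): on
cgroup v2, sample the hierarchical oom_kill count from the parent
cgroup's memory.events, so the count survives systemd tearing down and
recreating the service's child cgroup. The system.slice path below is
only a placeholder for whichever parent the service actually lives
under, and a v2 mount at /sys/fs/cgroup is assumed.

/*
 * Sketch only: read the hierarchical oom_kill counter from a parent
 * cgroup on cgroup v2. Because the parent memcg outlives the service's
 * child cgroup, the count is not lost when systemd recreates the child.
 *
 * PARENT_EVENTS is a placeholder; point it at the real parent of the
 * monitored service.
 */
#include <stdio.h>

#define PARENT_EVENTS "/sys/fs/cgroup/system.slice/memory.events"

static long read_oom_kill(const char *path)
{
	char line[256];
	long val = -1;
	FILE *f = fopen(path, "r");

	if (!f) {
		perror(path);
		return -1;
	}

	/* memory.events is a flat "key value" file; pick the oom_kill key. */
	while (fgets(line, sizeof(line), f)) {
		if (sscanf(line, "oom_kill %ld", &val) == 1)
			break;
	}

	fclose(f);
	return val;
}

int main(void)
{
	long kills = read_oom_kill(PARENT_EVENTS);

	if (kills < 0)
		return 1;

	printf("hierarchical oom_kill seen at the parent: %ld\n", kills);
	return 0;
}

If only the non-hierarchical count of a still-existing cgroup is wanted,
memory.events.local can be read the same way on v2.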