On Wed, 30 Aug 2017, Roman Gushchin wrote:

> I've spent some time to implement such a version.
>
> It really became shorter and more existing code was reused,
> however I've met a couple of serious issues:
>
> 1) Simple summing of per-task oom_score doesn't make sense.
> First, we calculate oom_score per-task, while we should sum per-process
> values, or, better, per-mm struct. We can take only the thread-group
> leader's score into account, but that's also not 100% accurate.
> And, again, we have the question of what to do with per-task
> oom_score_adj if we don't take the task's oom_score into account.
>
> Using memcg stats still looks to me like a more accurate and consistent
> way of estimating memcg memory footprint.

The patchset is introducing a new methodology for selecting oom victims,
so you can define how cgroups are compared vs other cgroups with your own
"badness" calculation.  I think your implementation based heavily on the
anon and unevictable lrus and unreclaimable slab is fine, and you can
describe that detail in the documentation (along with the caveat that it
is only calculated for nodes in the allocation's mempolicy).  With
memory.oom_priority, the user has full ability to change that selection.

Process selection heuristics have changed over time themselves; it's not
something that must be backwards compatible, and trying to sum the usage
from each of the cgroup's mm_structs and respect oom_score_adj is
unnecessarily complex.
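To make the comparison being discussed concrete, here is a rough
user-space sketch of that selection logic.  It is not the patchset's
code: the struct fields, helper names, and sample numbers are made up,
and the direction of the memory.oom_priority comparison (higher value =
preferred victim) is an assumption for illustration only.

	/*
	 * Illustrative model of cgroup-aware victim selection: a per-memcg
	 * footprint estimate from anon + unevictable + unreclaimable slab,
	 * with oom_priority taking precedence over the footprint.
	 * All names and the priority direction are assumptions, not the
	 * kernel implementation.
	 */
	#include <stddef.h>
	#include <stdio.h>

	struct memcg_sample {
		const char *name;
		long anon_pages;		/* anon LRU size */
		long unevictable_pages;		/* unevictable LRU size */
		long unreclaimable_slab;	/* unreclaimable slab charged */
		int oom_priority;		/* memory.oom_priority value */
	};

	/* Footprint estimate from memcg stats, per the discussion above. */
	static long memcg_oom_badness(const struct memcg_sample *cg)
	{
		return cg->anon_pages + cg->unevictable_pages +
		       cg->unreclaimable_slab;
	}

	/* Pick a victim: oom_priority dominates, badness breaks ties. */
	static const struct memcg_sample *
	select_victim(const struct memcg_sample *cgs, size_t n)
	{
		const struct memcg_sample *victim = NULL;
		size_t i;

		for (i = 0; i < n; i++) {
			if (!victim ||
			    cgs[i].oom_priority > victim->oom_priority ||
			    (cgs[i].oom_priority == victim->oom_priority &&
			     memcg_oom_badness(&cgs[i]) >
			     memcg_oom_badness(victim)))
				victim = &cgs[i];
		}
		return victim;
	}

	int main(void)
	{
		struct memcg_sample cgs[] = {
			{ "workload-a", 200000, 1000, 5000, 0 },
			{ "workload-b",  50000,  500, 2000, 3 },
		};

		/* workload-b wins on priority despite its smaller footprint. */
		printf("victim: %s\n", select_victim(cgs, 2)->name);
		return 0;
	}

The point of the sketch is only that the per-memcg badness number is an
internal detail the user never has to agree with: setting
memory.oom_priority overrides it entirely.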