On Tue, 3 Aug 2010, KAMEZAWA Hiroyuki wrote:

> One reason I pointed out is that this new parameter is hard to use for
> admins and library writers.
> old oom_adj was defined as a parameter which works as
> (memory usage of app)/oom_adj.

Where are you getting this definition from? Disregarding all the other
small adjustments in the old heuristic, a reduced version of the formula
was mm->total_vm << oom_adj. It's a shift, not a divide. That has no
sensible meaning.

> new oom_score_adj was defined as
> (memory usage of app * oom_score_adj)/system_memory

No, it's (rss + swap + oom_score_adj) / bound memory. It's an addition,
not a multiplication, and it's a proportion of the memory the
application is bound to, not the entire system (it could be constrained
by cpuset, mempolicy, or memcg).

> Then, an application's oom_score on one host is quite different from
> on another host. This operation is very new rather than a simple
> interface update.
> This opinion was rejected.

It wasn't rejected, I responded to your comment and you never wrote
back.

> Anyway, I believe any value other than OOM_DISABLE is useless,

You're right that OOM_DISABLE fulfills many typical use cases: simply
protecting a task by making it immune to the oom killer. But there are
other use cases for the oom killer that you're perhaps not using where a
sensible userspace tunable does make a difference: the goal of the
heuristic is always to kill the task consuming the most memory, to
avoid killing tons of applications for subsequent page allocations. We
do run important tasks that consume lots of memory, though, and the
kernel can't possibly know about that importance. So although you may
never use a positive oom_score_adj, others will, and you can probably
find a use case for subtracting a memory quantity from a known
memory-hogging task that you consider vital, in an effort to disregard
that quantity from the score. I'm sure you'll agree it's a much more
powerful (and fine-grained) interface than oom_adj.

> I have no concerns. I'll use memcg if I want to control this kind of
> thing.

That would work if you want to set up individual memcgs for every
application on your system, know what sane limits are for each one, and
are willing to incur the significant memory expense of enabling
CONFIG_CGROUP_MEM_RES_CTLR for its metadata.
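
To make the difference between the two formulas above concrete, here's a
reduced sketch in plain C. The function names, the 1000-point scale
placement, and the clamping are illustrative simplifications, not the
exact kernel implementation:

	/* Old heuristic: oom_adj was a bit shift on the task's total_vm. */
	unsigned long old_badness(unsigned long total_vm_pages, int oom_adj)
	{
		unsigned long points = total_vm_pages;

		if (oom_adj > 0)
			points <<= oom_adj;	/* each +1 doubles the score */
		else if (oom_adj < 0)
			points >>= -oom_adj;	/* each -1 halves it */
		return points;
	}

	/*
	 * New heuristic: the score is the task's rss + swap as a proportion
	 * of the memory it is bound to (a cpuset, mempolicy, or memcg limit;
	 * the whole system only if unconstrained), with oom_score_adj
	 * applied as a linear offset in units of 0.1% of that bound.
	 */
	long new_badness(unsigned long rss_pages, unsigned long swap_pages,
			 unsigned long bound_pages, int oom_score_adj)
	{
		long points = (rss_pages + swap_pages) * 1000 / bound_pages;

		points += oom_score_adj;
		return points > 0 ? points : 0;
	}

Note what the shift semantics of the old interface imply: the finest
adjustment oom_adj could express was doubling or halving the entire
score, which is exactly why discounting a specific memory quantity was
impossible with it.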
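And a minimal userspace sketch of the "subtract a known quantity" use
case. The pid, the 2GB cache figure, the 16GB bound, and the helper name
are all hypothetical, chosen only to show the arithmetic:

	/*
	 * Hypothetical: a vital daemon keeps a ~2GB cache resident on a
	 * machine where it is bound to 16GB; discount that share so the oom
	 * killer judges the task on the rest of its usage.
	 */
	#include <stdio.h>
	#include <sys/types.h>

	static int discount_from_score(pid_t pid, long discount_kb,
				       long bound_kb)
	{
		char path[64];
		FILE *f;
		long adj = -(discount_kb * 1000 / bound_kb); /* units of 0.1% */

		snprintf(path, sizeof(path), "/proc/%d/oom_score_adj",
			 (int)pid);
		f = fopen(path, "w");
		if (!f)
			return -1;
		fprintf(f, "%ld\n", adj);
		return fclose(f);
	}

	int main(void)
	{
		/* 2GB of 16GB bound memory -> oom_score_adj of -125 */
		return discount_from_score(1234, 2L * 1024 * 1024,
					   16L * 1024 * 1024);
	}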