On Fri, Apr 20, 2012 at 3:56 PM, Rik van Riel <riel@xxxxxxxxxx> wrote:
> On 04/20/2012 06:50 PM, Ying Han wrote:
>
>> Regarding the misuse case, here I am going to lay out the ground rule
>> for setting up the soft_limit:
>>
>> "
>> Never over-commit the system by soft limit.
>> "
>
>> I think it is reasonable to lay this out upfront; otherwise we cannot
>> make all the misuse cases right. If we follow that route, a lot of
>> things will become clear.
>
> While that rule looks reasonable at first glance, I do not
> believe it is possible to follow it in practice.
>
> One reason is memory resizing through ballooning in virtual
> machines. It is possible for the "physical" memory size to
> shrink below the sum of the soft limits.

Hmm, can you give more details on that? I assume the soft_limit should be
adjusted at run-time based on the memory usage and, in your case, on the
"physical" memory size. This is different from the hard_limit, which we
can over-commit by setting it once and living with it.

> Another reason is memory zones and NUMA. It is possible for
> one memory zone (or NUMA node) to only have cgroups that
> are under their soft limit.
>
> If this happens to be the one memory zone we can allocate
> network buffers from, we could deadlock the system if we
> refused to reclaim pages from a cgroup under its limit.

Yes, that is the problem we talked about during LSF. Having a
"per-memcg-per-zone soft limit" sounds too complicated and is not
practical at all. To deal with that, my current patch identifies the
situation by doing a first round of scanning, and then skips the
soft_limit if that is the case.

--Ying

> --
> All rights reversed
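[Editor's note: the two-round scan described above could be sketched roughly as follows. This is purely an illustration of the idea, not the actual patch; all struct and function names here are hypothetical and the real kernel code paths differ considerably.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical per-cgroup state for illustration only; not kernel code. */
struct memcg {
    unsigned long usage;       /* pages currently charged */
    unsigned long soft_limit;  /* soft limit in pages */
    unsigned long reclaimable; /* pages we could free from this group */
};

/*
 * First round: only reclaim from groups over their soft limit.
 * If that frees nothing (e.g. every group in this zone is under its
 * soft limit), run a second round that ignores the soft limit, so an
 * allocation bound to this zone can still make progress instead of
 * deadlocking.
 */
static unsigned long shrink_zone_groups(struct memcg *groups, size_t n,
                                        unsigned long nr_to_reclaim)
{
    unsigned long reclaimed = 0;
    bool ignore_soft_limit = false;

    for (int round = 0; round < 2 && reclaimed < nr_to_reclaim; round++) {
        for (size_t i = 0; i < n && reclaimed < nr_to_reclaim; i++) {
            struct memcg *m = &groups[i];

            if (!ignore_soft_limit && m->usage <= m->soft_limit)
                continue; /* respect the soft limit on the first pass */

            unsigned long take = m->reclaimable;
            if (take > nr_to_reclaim - reclaimed)
                take = nr_to_reclaim - reclaimed;
            m->reclaimable -= take;
            m->usage -= take;
            reclaimed += take;
        }
        ignore_soft_limit = true; /* second round: avoid the deadlock */
    }
    return reclaimed;
}
```

The key property is that a zone populated entirely by under-limit cgroups still yields pages on the second pass, which is exactly the network-buffer deadlock case raised above.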