On Wed 17-05-17 09:48:25, Christoph Lameter wrote:
> On Wed, 17 May 2017, Michal Hocko wrote:
>
> > > > So how are you going to distinguish VM_FAULT_OOM from an empty mempolicy
> > > > case in a raceless way?
> > >
> > > You don't have to do that if you do not create an empty mempolicy in the
> > > first place. The current kernel code avoids that by first allowing access
> > > to the new set of nodes and removing the old ones from the set when done.
> >
> > which is racy, as Vlastimil pointed out. If we simply fail such an
> > allocation the failure will go up the call chain until we hit the OOM
> > killer due to VM_FAULT_OOM. How would you want to handle that?
>
> The race is where? If you expand the node set during the move of the
> application then you are safe in terms of the legacy apps that did not
> include static bindings.

I am pretty sure it is described in those changelogs and I won't repeat
it here.

> If you have screwy things like static mbinds in there then you are
> hopelessly lost anyways. You may have moved the process to another set
> of nodes but the static bindings may refer to a node no longer
> available. Thus the OOM is legitimate.

The point is that you do _not_ want such a process to trigger the OOM
killer, because it can cause other processes to be killed.

> At least a user space app could inspect
> the situation and come up with custom ways of dealing with the mess.

I do not really see how this would help to prevent a malicious user
from playing tricks.
-- 
Michal Hocko
SUSE Labs
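For readers following along: the disagreement is about the transient window during a cpuset node migration. A minimal sketch (hypothetical helper names, not kernel code) of the difference between replacing the allowed nodemask in one step and the expand-then-contract sequence the thread refers to, modelling nodemasks as Python sets and showing what a concurrent allocator intersecting the policy with the allowed set could observe:

```python
def remap(policy, old_nodes, new_nodes):
    """Remap each node of a mempolicy from the old node set onto the
    new one (positional remap, as non-static policies are rebound)."""
    old, new = sorted(old_nodes), sorted(new_nodes)
    return {new[old.index(n) % len(new)] for n in policy}

def migrate_direct(policy, old_nodes, new_nodes):
    """Naive one-step switch: revoke the old nodes, then remap the
    policy. Yields the effective (policy & allowed) set at each
    intermediate point, as a racing allocator would see it."""
    allowed = set(new_nodes)            # old nodes revoked first
    yield policy & allowed              # window: empty set -> VM_FAULT_OOM
    policy = remap(policy, old_nodes, new_nodes)
    yield policy & allowed

def migrate_two_step(policy, old_nodes, new_nodes):
    """Expand-then-contract: first allow the union of old and new
    nodes, remap the policy, and only then drop the old nodes.
    The effective set is non-empty at every intermediate point."""
    allowed = set(old_nodes) | set(new_nodes)   # expand first
    yield policy & allowed
    policy = remap(policy, old_nodes, new_nodes)
    yield policy & allowed
    allowed = set(new_nodes)                    # contract last
    yield policy & allowed

# Policy bound to nodes {0, 1}; the cpuset moves to nodes {2, 3}.
print(list(migrate_direct({0, 1}, {0, 1}, {2, 3})))    # first state is empty
print(list(migrate_two_step({0, 1}, {0, 1}, {2, 3})))  # every state non-empty
```

This only illustrates the window Vlastimil's point is about; it says nothing about static (MPOL_F_STATIC_NODES) bindings, which by definition are not remapped and can legitimately end up referring to nodes outside the new set.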