On Wed, 17 May 2017, Michal Hocko wrote:

> > > So how are you going to distinguish VM_FAULT_OOM from an empty mempolicy
> > > case in a raceless way?
> >
> > You don't have to do that if you do not create an empty mempolicy in the
> > first place. The current kernel code avoids that by first allowing access
> > to the new set of nodes and removing the old ones from the set when done.
>
> which is racy, as Vlastimil pointed out. If we simply fail such an
> allocation the failure will go up the call chain until we hit the OOM
> killer due to VM_FAULT_OOM. How would you want to handle that?

Where is the race? If you expand the node set during the move of the
application, then you are safe with respect to legacy apps that did not use
static bindings.

If you have screwy things like static mbinds in there, then you are
hopelessly lost anyway. You may have moved the process to another set of
nodes, but the static bindings may refer to a node that is no longer
available. Thus the OOM is legitimate. At least a user-space app could
inspect the situation and come up with a custom way of dealing with the
mess.

--
To unsubscribe from this list: send the line "unsubscribe linux-api" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html