On 13/05/2016 11:52, Michal Hocko wrote:
> On Fri 13-05-16 10:44:30, Mason wrote:
>> On 13/05/2016 10:04, Michal Hocko wrote:
>>
>>> On Tue 10-05-16 13:56:30, Sebastian Frias wrote:
>>> [...]
>>>> NOTE: I understand that the overcommit mode can be changed dynamically thru
>>>> sysctl, but on embedded systems, where we know in advance that overcommit
>>>> will be disabled, there's no reason to postpone such setting.
>>>
>>> To be honest I am not particularly happy about yet another config
>>> option. At least not without a strong reason (the one above doesn't
>>> sound that way). The config space is really large already.
>>> So why a later initialization matters at all? Early userspace shouldn't
>>> consume too much address space to blow up later, no?
>>
>> One thing I'm not quite clear on is: why was the default set
>> to over-commit on?
>
> Because many applications simply rely on large and sparsely used address
> space, I guess.

What kind of applications are we talking about here? Server apps?
Client apps? Supercomputer apps?

I've heard that some HPC software uses large sparse matrices, but is it
a common idiom to request large allocations only to use a fraction of
them?

If you'll excuse the slight trolling, I'm sure many applications don't
expect to be randomly zapped by the OOM killer ;-)

> That's why the default is GUESS where we ignore the cumulative
> charges and simply check the current state and blow up only when
> the current request is way too large.

I wouldn't call denying a request "blowing up". The application will
receive NULL and is supposed to handle it gracefully. "Blowing up" is
receiving SIGKILL because another process happened to allocate too much
memory.

Regards.
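
P.S. To make that last point concrete, here is a minimal userspace
sketch (my own illustration, not something from this thread) of what
"handling it gracefully" looks like when overcommit is disabled, e.g.
with vm.overcommit_memory=2. The 1 TiB request size is an arbitrary
value chosen only so that the commit limit is likely to be exceeded:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
        /* An intentionally huge request: with vm.overcommit_memory=2 the
         * kernel checks the commit charge up front and may refuse it, so
         * malloc() returns NULL here instead of the process being
         * SIGKILLed later by the OOM killer. */
        size_t len = (size_t)1 << 40;   /* 1 TiB */

        void *p = malloc(len);
        if (p == NULL) {
                /* Graceful handling: fall back, shrink the request,
                 * or report the failure to the user. */
                fprintf(stderr, "allocation of %zu bytes denied\n", len);
                return 1;
        }

        /* ... use the memory ... */
        free(p);
        return 0;
}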