On Sat, Feb 28, 2015 at 05:15:58PM -0500, Johannes Weiner wrote:
> Overestimating should be fine, the result would be a bit of false
> memory pressure.  But underestimating and looping can't be an option
> or the original lockups will still be there.  We need to guarantee
> forward progress or the problem is somewhat mitigated at best - only
> now with quite a bit more complexity in the allocator and the
> filesystems.

We've lived with looping as it is, and in practice it has actually
worked well.  I can only speak for ext4, but I do a lot of testing
under very high memory pressure, and ext4 is used in *production*
under very high stress --- and the only time we've run into trouble
is when the looping behaviour somehow got accidentally *removed*.
There have been MM experts worrying about this situation for a very
long time, but honestly, it seems to be much more of a theoretical
concern than an actual one.

So if you don't want to get hints/estimates about how much memory the
file system is about to use, or about when the file system is willing
to wait or even potentially return ENOMEM (although I suspect that
starting to return ENOMEM where most user space applications don't
expect it will cause more problems), I'm personally happy to just use
GFP_NOFAIL everywhere --- or to hard-code my own infinite loops if
the MM developers want to take GFP_NOFAIL away.  Because in my
experience, looping simply hasn't been as awful as some folks on this
thread have made it out to be.

So if you don't like the complexity because the perfect is the enemy
of the good, we can just drop this, and the file systems can simply
continue to loop around their memory allocation calls... or, if that
fails, we can start adding subsystem-specific mempools, which would
be even more wasteful of memory and probably at least as complicated.

					- Ted
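
P.S.  For anyone following along, here is a minimal sketch of the
open-coded retry loop I'm describing --- illustrative only, not
ext4's actual code, and the helper name is made up:

	/*
	 * Illustrative sketch: loop on allocation failure instead of
	 * returning ENOMEM up the stack.  Roughly what passing
	 * __GFP_NOFAIL asks the allocator to do internally.
	 */
	static void *retry_kmalloc(size_t size, gfp_t gfp)
	{
		void *p;

		while (!(p = kmalloc(size, gfp))) {
			/* back off briefly so reclaim can make progress */
			congestion_wait(BLK_RW_ASYNC, HZ/50);
		}
		return p;
	}

The whole argument in this thread is about whether that loop lives in
the file system (as above) or inside the allocator (GFP_NOFAIL).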