On Tue, Oct 6, 2015 at 8:55 AM, Eric W. Biederman <ebiederm@xxxxxxxxxxxx> wrote:
>
> Not to take away from your point about very small allocations. However
> assuming allocations larger than a page will always succeed is
> downright dangerous.

We've required retrying for *at least* order-1 allocations, exactly
because things like fork() have wanted them, and:

 - as you say, you can be unlucky even with reasonable amounts of free
   memory

 - the page-out code is approximate and doesn't guarantee that you get
   buddy coalescing

 - just failing after a couple of loops has been known to result in
   fork() and similar friends returning -EAGAIN and breaking user space.

Really. Stop this idiocy. We have gone through this before. It's a
disaster.

The basic fact remains: kernel allocations are so important that
rather than fail, you should kill user space. Only kernel allocations
that *explicitly* know that they have fallback code should fail, and
they should just use __GFP_NORETRY.

So the rule ends up being that we retry the memory freeing loop for
small allocations (where "small" is something like "order 2 or less").

So really. If you find some particular case that is painful because it
wants an order-1 or order-2 allocation, then you do this:

 - do the allocation with __GFP_NORETRY

 - have a fallback that uses vmalloc, or that is able to make the
   buffer even smaller (see the sketch below).

But by default we will continue to make small orders retry. As
mentioned, we have tried the alternatives. They don't work.

                    Linus
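A minimal sketch of the __GFP_NORETRY-plus-fallback pattern described
above, assuming a caller that just needs len usable bytes;
my_buf_alloc() and my_buf_free() are hypothetical names used for
illustration, not existing kernel APIs:

/*
 * Illustrative sketch only: try a physically contiguous allocation
 * first, but tell the page allocator not to retry or OOM-kill,
 * because we have a fallback. The helper names are made up.
 */
#include <linux/slab.h>
#include <linux/vmalloc.h>
#include <linux/mm.h>

static void *my_buf_alloc(size_t len)
{
	void *buf;

	/*
	 * Contiguous attempt: __GFP_NORETRY means fail fast instead of
	 * looping in reclaim; __GFP_NOWARN suppresses the allocation-
	 * failure warning since failure is expected and handled.
	 */
	buf = kmalloc(len, GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN);
	if (buf)
		return buf;

	/* Fall back to vmalloc(), which only needs order-0 pages. */
	return vmalloc(len);
}

static void my_buf_free(void *buf)
{
	/* Free with whichever allocator actually provided the buffer. */
	if (is_vmalloc_addr(buf))
		vfree(buf);
	else
		kfree(buf);
}

Later kernels wrap exactly this idiom in kvmalloc()/kvfree(), so new
code would normally use those rather than open-coding the fallback.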