On Tue, 25 Oct 2011, Colin Cross wrote:

> > On Tue, 25 Oct 2011, Mel Gorman wrote:
> >
> >> That said, it will be difficult to remember why checking __GFP_NOFAIL in
> >> this case is necessary and someone might "optimise" it away later. It
> >> would be preferable if it was self-documenting. Maybe something like
> >> this? (This is totally untested)
> >>
> >
> > __GFP_NOFAIL _should_ be optimized away in this case because all he's
> > passing is __GFP_WAIT | __GFP_NOFAIL. That doesn't make any sense unless
> > all you want to do is livelock.
>
> __GFP_NOFAIL is not set in the case that I care about. If my change
> is hit, no forward progress has been made, so I agree it should not
> honor __GFP_NOFAIL.
>

I was responding to Mel's comment, not your case.

> > __GFP_NOFAIL doesn't mean the page allocator would infinitely loop in all
> > conditions. That's why GFP_ATOMIC | __GFP_NOFAIL actually fails, and I
> > would argue that __GFP_WAIT | __GFP_NOFAIL should fail as well since it's
> > the exact same condition except it doesn't have access to the extra
> > memory reserves.
> >
> > Suspend needs to set __GFP_NORETRY to avoid the livelock if it's going
> > to disable all means of memory reclaiming or freeing in the page
> > allocator. Or, better yet, just make it GFP_NOWAIT.
> >
>
> It would be nice to give compaction and the slab shrinker a chance to
> recover a few pages; both methods will work fine in suspend.

Ok, so __GFP_NORETRY it is. Just make sure that when
pm_restrict_gfp_mask() masks off __GFP_IO and __GFP_FS it also sets
__GFP_NORETRY, even though the name of the function then seems less
appropriate (a rough sketch of what I mean is at the end of this mail).

> GFP_NOWAIT will prevent them from ever running, and __GFP_NORETRY will
> give up even if they are making progress but haven't recovered enough
> pages.
>

These are all order-3 or smaller allocations where fragmentation isn't a
big issue. If a call into direct compaction or reclaim fails to reclaim
that small amount of contiguous memory, what makes you believe that a
second call will?
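
Here's a rough, untested sketch of what I have in mind, against
mm/page_alloc.c (where gfp_allowed_mask and saved_gfp_mask already
live). gfp_allowed_mask can only clear bits, so forcing __GFP_NORETRY
on would need something the allocator ORs into every request; the
gfp_forced_mask below is purely hypothetical and only meant to
illustrate the idea, not a real patch.

/* hypothetical: bits forced on for every allocation during suspend */
static gfp_t gfp_forced_mask __read_mostly;

void pm_restrict_gfp_mask(void)
{
	WARN_ON(!mutex_is_locked(&pm_mutex));
	WARN_ON(saved_gfp_mask);
	saved_gfp_mask = gfp_allowed_mask;
	gfp_allowed_mask &= ~(__GFP_IO | __GFP_FS);
	/* give up instead of looping once reclaim and I/O are crippled */
	gfp_forced_mask |= __GFP_NORETRY;
}

void pm_restore_gfp_mask(void)
{
	WARN_ON(!mutex_is_locked(&pm_mutex));
	if (saved_gfp_mask) {
		gfp_allowed_mask = saved_gfp_mask;
		saved_gfp_mask = 0;
	}
	gfp_forced_mask &= ~__GFP_NORETRY;
}

The allocator would then need, next to the existing

	gfp_mask &= gfp_allowed_mask;

in __alloc_pages_nodemask(), an additional

	gfp_mask |= gfp_forced_mask;

__GFP_NOFAIL callers would still need thought, since forcing
__GFP_NORETRY onto them lets their allocations fail, but failing during
suspend looks like the lesser evil compared to the livelock.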