On Wed 02-12-15 15:07:26, Hillf Danton wrote:
> > From: Michal Hocko <mhocko@xxxxxxxx>
> > 
> > __alloc_pages_slowpath retries costly allocations until at least
> > order worth of pages were reclaimed or the watermark check for at least
> > one zone would succeed after reclaiming all pages if the reclaim
> > hasn't made any progress.
> > 
> > The first condition was added by a41f24ea9fd6 ("page allocator: smarter
> > retry of costly-order allocations") and it assumed that lumpy reclaim
> > could have created a page of the sufficient order. Lumpy reclaim
> > has been removed quite some time ago so the assumption doesn't hold
> > anymore. It would be more appropriate to check the compaction progress
> > instead but this patch simply removes the check and relies solely
> > on the watermark check.
> > 
> > To prevent too many retries the stall_backoff is not reset after
> > a reclaim which made progress because we cannot assume it helped the
> > high order situation.
> > 
> > Signed-off-by: Michal Hocko <mhocko@xxxxxxxx>
> > ---
> >  mm/page_alloc.c | 20 ++++++++------------
> >  1 file changed, 8 insertions(+), 12 deletions(-)
> > 
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 168a675e9116..45de14cd62f4 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -2998,7 +2998,6 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> >  	bool can_direct_reclaim = gfp_mask & __GFP_DIRECT_RECLAIM;
> >  	struct page *page = NULL;
> >  	int alloc_flags;
> > -	unsigned long pages_reclaimed = 0;
> >  	unsigned long did_some_progress;
> >  	enum migrate_mode migration_mode = MIGRATE_ASYNC;
> >  	bool deferred_compaction = false;
> > @@ -3167,24 +3166,21 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> >  
> >  	/*
> >  	 * Do not retry high order allocations unless they are __GFP_REPEAT
> > -	 * and even then do not retry endlessly unless explicitly told so
> > +	 * unless explicitly told so.
> 
> s/unless/or/

Fixed

> Acked-by: Hillf Danton <hillf.zj@xxxxxxxxxxxxxxx>

Thanks!

> >  	 */
> > -	pages_reclaimed += did_some_progress;
> > -	if (order > PAGE_ALLOC_COSTLY_ORDER) {
> > -		if (!(gfp_mask & __GFP_NOFAIL) &&
> > -		    (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order)))
> > -			goto noretry;
> > -
> > -		if (did_some_progress)
> > -			goto retry;
> > -	}
> > +	if (order > PAGE_ALLOC_COSTLY_ORDER &&
> > +			!(gfp_mask & (__GFP_REPEAT|__GFP_NOFAIL)))
> > +		goto noretry;
> > 
> >  	/*
> >  	 * Be optimistic and consider all pages on reclaimable LRUs as usable
> >  	 * but make sure we converge to OOM if we cannot make any progress after
> >  	 * multiple consecutive failed attempts.
> > +	 * Costly __GFP_REPEAT allocations might have made a progress but this
> > +	 * doesn't mean their order will become available due to high fragmentation
> > +	 * so do not reset the backoff for them
> >  	 */
> > -	if (did_some_progress)
> > +	if (did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER)
> >  		stall_backoff = 0;
> >  	else
> >  		stall_backoff = min(stall_backoff+1, MAX_STALL_BACKOFF);
> > -- 
> > 2.6.2

-- 
Michal Hocko
SUSE Labs
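
[Editor's note: for readers following the thread without the surrounding kernel
source, below is a minimal standalone sketch of the retry decision as it looks
after this patch. It is an illustration only, not kernel code: GFP_REPEAT,
GFP_NOFAIL and the MAX_STALL_BACKOFF value are simplified stand-ins for the
real __GFP_* flags and kernel constants, and should_retry() collapses just the
two checks the patch touches.]

/*
 * Illustrative sketch only -- not the kernel code.  It restates the
 * retry decision __alloc_pages_slowpath makes after this patch, using
 * simplified stand-in flags and constants.
 */
#include <stdbool.h>
#include <stdio.h>

#define PAGE_ALLOC_COSTLY_ORDER	3
#define MAX_STALL_BACKOFF	16	/* placeholder, not the kernel value */

#define GFP_REPEAT	0x1u		/* stand-in for __GFP_REPEAT */
#define GFP_NOFAIL	0x2u		/* stand-in for __GFP_NOFAIL */

/* Decide whether to loop back to reclaim, updating the stall backoff. */
static bool should_retry(unsigned int gfp_mask, unsigned int order,
			 bool did_some_progress, unsigned int *stall_backoff)
{
	/*
	 * Costly orders give up right away unless the caller explicitly
	 * asked for retries (__GFP_REPEAT) or cannot fail (__GFP_NOFAIL).
	 * The old "pages_reclaimed >= 1 << order" heuristic is gone.
	 */
	if (order > PAGE_ALLOC_COSTLY_ORDER &&
	    !(gfp_mask & (GFP_REPEAT | GFP_NOFAIL)))
		return false;

	/*
	 * Reclaim progress resets the backoff only for non-costly orders;
	 * for costly ones the reclaimed pages do not imply a page of the
	 * requested order will become available.
	 */
	if (did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER)
		*stall_backoff = 0;
	else if (*stall_backoff < MAX_STALL_BACKOFF)
		(*stall_backoff)++;

	return true;
}

int main(void)
{
	unsigned int backoff = 0;
	bool retry;

	/* Costly order-5 request without __GFP_REPEAT/__GFP_NOFAIL: fail now. */
	retry = should_retry(0, 5, true, &backoff);
	printf("costly, plain:  retry=%d backoff=%u\n", retry, backoff);

	/* Same order with __GFP_REPEAT: retried, but the backoff keeps growing. */
	retry = should_retry(GFP_REPEAT, 5, true, &backoff);
	printf("costly, repeat: retry=%d backoff=%u\n", retry, backoff);

	/* Order-0 request that made progress: retried and backoff reset. */
	retry = should_retry(0, 0, true, &backoff);
	printf("order-0:        retry=%d backoff=%u\n", retry, backoff);

	return 0;
}

[The behavioural change the sketch demonstrates is that reclaim progress no
longer resets the backoff for costly orders, so a heavily fragmented system
converges to failure instead of retrying indefinitely.]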