On Tue, May 31, 2016 at 11:44:24PM +0200, Vlastimil Babka wrote:
> On 05/30/2016 05:56 PM, Mel Gorman wrote:
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index dba8cfd0b2d6..f2c1e47adc11 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -3232,6 +3232,9 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> >  	 * allocations are system rather than user orientated
> >  	 */
> >  	ac->zonelist = node_zonelist(numa_node_id(), gfp_mask);
> > +	ac->preferred_zoneref = first_zones_zonelist(ac->zonelist,
> > +				ac->high_zoneidx, ac->nodemask);
> > +	ac->classzone_idx = zonelist_zone_idx(ac->preferred_zoneref);
> >  	page = get_page_from_freelist(gfp_mask, order,
> >  				ALLOC_NO_WATERMARKS, ac);
> >  	if (page)
> >
>
> Even if that didn't help for this report, I think it's needed too
> (except the classzone_idx which doesn't exist anymore?).
>
> And I think the following as well. (the changed comment could be also
> just deleted).
>

Why? The comment is fine, and I do not see why the recalculation should
happen. In the original code, the preferred_zoneref used for statistics
is calculated from either the supplied nodemask or
cpuset_current_mems_allowed during the initial attempt. It then relies
on the cpuset checks in the slowpath to enforce mems_allowed, but the
preferred zone does not change.

With your proposed change, it's possible that the preferred_zoneref
recalculation points to a zoneref disallowed by
cpuset_current_mems_allowed. While that zone will be skipped during
allocation, the statistics will still be accounted against a zone that
is potentially outside what is allowed.

--
Mel Gorman
SUSE Labs
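
To make the accounting hazard concrete, below is a minimal userspace sketch
(not kernel code). The structs and helpers are simplified stand-ins for the
kernel's zonelist/zoneref/nodemask machinery, assumed here only for
illustration: a "preferred" zone recomputed from the zonelist and the
allocation nodemask alone can land on a node outside the cpuset's
mems_allowed, so counters keyed off that zone are charged to a node the
task is not permitted to use, even though the allocation loop skips it.

	/*
	 * Userspace sketch only; all names are simplified stand-ins, not
	 * the real kernel structures or helpers.
	 */
	#include <stdio.h>
	#include <stdbool.h>

	struct zone {
		int node;		/* node the zone belongs to */
		const char *name;
		unsigned long numa_hit;	/* stand-in for a per-zone counter */
	};

	/* Zones in preference order, NULL-terminated. */
	static struct zone node0_normal = { .node = 0, .name = "Node 0 Normal" };
	static struct zone node1_normal = { .node = 1, .name = "Node 1 Normal" };
	static struct zone *zonelist[] = { &node0_normal, &node1_normal, NULL };

	static bool node_allowed(int node, unsigned long mask)
	{
		return mask & (1UL << node);
	}

	/*
	 * Rough analogue of recomputing the preferred zone: pick the first
	 * zone matching the allocation nodemask, with no knowledge of any
	 * cpuset restriction.
	 */
	static struct zone *first_zone(unsigned long alloc_nodemask)
	{
		for (struct zone **z = zonelist; *z; z++)
			if (node_allowed((*z)->node, alloc_nodemask))
				return *z;
		return NULL;
	}

	int main(void)
	{
		unsigned long alloc_nodemask = ~0UL;	/* allocation allows all nodes */
		unsigned long mems_allowed = 1UL << 1;	/* cpuset permits node 1 only */

		/* The recalculated "preferred" zone ignores mems_allowed... */
		struct zone *preferred = first_zone(alloc_nodemask);

		/*
		 * ...so a statistic charged against it lands on a disallowed
		 * node, even though the allocation path would skip this zone.
		 */
		preferred->numa_hit++;

		printf("preferred zone: %s (node %d), allowed by cpuset: %s\n",
		       preferred->name, preferred->node,
		       node_allowed(preferred->node, mems_allowed) ? "yes" : "no");
		return 0;
	}

With these assumptions the sketch reports the preferred zone as Node 0
Normal while the cpuset only allows node 1, which is the mismatch between
accounting and mems_allowed described above.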