On 08/24/2015 02:09 PM, Mel Gorman wrote:
> An allocation request will either use the given nodemask or the cpuset
> current task's mems_allowed. A cpuset retry will recheck the caller's
> nodemask, and while it's trivial overhead during an extremely rare
> operation, it is also unnecessary. This patch fixes it.
>
> Signed-off-by: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
> ---
>  mm/page_alloc.c | 5 ++---
>  1 file changed, 2 insertions(+), 3 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 2c1c3bf54d15..32d1cec124bc 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3171,7 +3171,7 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order,
>  	gfp_t alloc_mask; /* The gfp_t that was actually used for allocation */
>  	struct alloc_context ac = {
>  		.high_zoneidx = gfp_zone(gfp_mask),
> -		.nodemask = nodemask,
> +		.nodemask = nodemask ? : &cpuset_current_mems_allowed,
Hm, this is a functional change for atomic allocations with a NULL nodemask. ac.nodemask is passed down to __alloc_pages_slowpath(), which might decide that ALLOC_CPUSET is not to be used (because the allocation is atomic), yet get_page_from_freelist() and the other slowpath helpers would still be restricted by the cpuset-derived ac.nodemask.
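To illustrate what I mean, here is a rough sketch of the two paths as I remember them in current mm/page_alloc.c, heavily trimmed, so treat it as an illustration rather than an exact quote. gfp_to_alloc_flags(), for_each_zone_zonelist_nodemask(), cpusets_enabled() and cpuset_zone_allowed() are the pieces not quoted in the patch above:

/* gfp_to_alloc_flags(): atomic allocations deliberately drop ALLOC_CPUSET */
static inline int gfp_to_alloc_flags(gfp_t gfp_mask)
{
	int alloc_flags = ALLOC_WMARK_MIN | ALLOC_CPUSET;
	const bool atomic = !(gfp_mask & __GFP_WAIT);	/* roughly; details trimmed */

	if (atomic) {
		/*
		 * Ignore cpuset mems for GFP_ATOMIC rather than fail, see the
		 * comment for __cpuset_node_allowed().
		 */
		alloc_flags &= ~ALLOC_CPUSET;
	}
	/* ... */
	return alloc_flags;
}

/*
 * get_page_from_freelist(): the nodemask restricts the zonelist walk
 * unconditionally, before the ALLOC_CPUSET check is even reached.
 */
for_each_zone_zonelist_nodemask(zone, z, zonelist, ac->high_zoneidx,
							ac->nodemask) {
	if (cpusets_enabled() &&
		(alloc_flags & ALLOC_CPUSET) &&
		!cpuset_zone_allowed(zone, gfp_mask))
			continue;
	/* ... */
}

So currently an atomic allocation with nodemask == NULL can fall back to zones outside the cpuset once ALLOC_CPUSET is cleared; with ac.nodemask pre-filled with cpuset_current_mems_allowed, the zonelist walk itself is already confined to the cpuset's nodes, and clearing ALLOC_CPUSET no longer makes a difference.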
> 		.migratetype = gfpflags_to_migratetype(gfp_mask),
> 	};
>
> @@ -3206,8 +3206,7 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order,
>
>  	/* The preferred zone is used for statistics later */
>  	preferred_zoneref = first_zones_zonelist(ac.zonelist, ac.high_zoneidx,
> -				ac.nodemask ? : &cpuset_current_mems_allowed,
> -				&ac.preferred_zone);
> +				ac.nodemask, &ac.preferred_zone);
>  	if (!ac.preferred_zone)
>  		goto out;
>  	ac.classzone_idx = zonelist_zone_idx(preferred_zoneref);