Support for the p->numa_policy affinity tracking by the scheduler went
missing during the mm/ unification: revive and integrate it properly.

( This in particular fixes NUMA_POLICY_MANYBUDDIES, a bug that caused a
  number of regressions in various workloads such as numa01, and that
  hurt !THP workloads especially. )

Cc: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Cc: Peter Zijlstra <a.p.zijlstra@xxxxxxxxx>
Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>
Cc: Rik van Riel <riel@xxxxxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxx>
Cc: Hugh Dickins <hughd@xxxxxxxxxx>
Signed-off-by: Ingo Molnar <mingo@xxxxxxxxxx>
---
 mm/mempolicy.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 2f2095c..6bb9fd0 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -121,8 +121,10 @@ static struct mempolicy default_policy_local = {
 static struct mempolicy *default_policy(void)
 {
 #ifdef CONFIG_NUMA_BALANCING
-        if (task_numa_shared(current) == 1)
-                return &current->numa_policy;
+        struct mempolicy *pol = &current->numa_policy;
+
+        if (task_numa_shared(current) == 1 && nodes_weight(pol->v.nodes) >= 2)
+                return pol;
 #endif
         return &default_policy_local;
 }
@@ -135,6 +137,11 @@ static struct mempolicy *get_task_policy(struct task_struct *p)
         int node;
 
         if (!pol) {
+#ifdef CONFIG_NUMA_BALANCING
+                pol = default_policy();
+                if (pol != &default_policy_local)
+                        return pol;
+#endif
                 node = numa_node_id();
                 if (node != -1)
                         pol = &preferred_node_policy[node];
@@ -2367,7 +2374,8 @@ int mpol_misplaced(struct page *page, struct vm_area_struct *vma, unsigned long
                         shift = PAGE_SHIFT;
 
                 target_node = interleave_nid(pol, vma, addr, shift);
-                break;
+
+                goto out_keep_page;
         }
 
         case MPOL_PREFERRED:
-- 
1.7.11.7
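
( Side note, for review convenience: below is a small userspace sketch of
  the policy-selection order the patch restores in get_task_policy() /
  default_policy(). It is only an illustration under simplifying
  assumptions - the structs and helpers are mocks standing in for the
  kernel's task_struct/mempolicy machinery, task_numa_shared(),
  nodes_weight() and numa_node_id(), not the real API. )

#include <stdio.h>

/*
 * Sketch of the restored fallback chain:
 *   1) if the task is NUMA-shared and its per-task numa_policy spans at
 *      least two nodes, use &p->numa_policy;
 *   2) otherwise fall back to the preferred-node policy of the local node;
 *   3) otherwise use the local default policy.
 */

struct mock_mempolicy {
        const char *name;
        int nr_nodes;                   /* stand-in for nodes_weight(pol->v.nodes) */
};

struct mock_task {
        int numa_shared;                /* stand-in for task_numa_shared(p) == 1 */
        struct mock_mempolicy numa_policy;
        struct mock_mempolicy *mempolicy;       /* explicit task policy, may be NULL */
};

static struct mock_mempolicy default_policy_local = { "local default policy", 1 };
static struct mock_mempolicy preferred_node_policy[2] = {
        { "preferred-node policy (node 0)", 1 },
        { "preferred-node policy (node 1)", 1 },
};

/* Mirrors the patched default_policy(): only trust the task's NUMA policy
 * when it actually spans two or more nodes. */
static struct mock_mempolicy *mock_default_policy(struct mock_task *p)
{
        struct mock_mempolicy *pol = &p->numa_policy;

        if (p->numa_shared == 1 && pol->nr_nodes >= 2)
                return pol;

        return &default_policy_local;
}

/* Mirrors the fallback chain in the patched get_task_policy(). */
static struct mock_mempolicy *mock_get_task_policy(struct mock_task *p, int node)
{
        struct mock_mempolicy *pol = p->mempolicy;

        if (!pol) {
                pol = mock_default_policy(p);
                if (pol != &default_policy_local)
                        return pol;

                if (node != -1)
                        pol = &preferred_node_policy[node];
        }

        return pol;
}

int main(void)
{
        struct mock_task shared  = { 1, { "per-task numa_policy", 2 }, NULL };
        struct mock_task private = { 0, { "per-task numa_policy", 1 }, NULL };

        printf("shared task  -> %s\n", mock_get_task_policy(&shared, 0)->name);
        printf("private task -> %s\n", mock_get_task_policy(&private, 1)->name);

        return 0;
}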