Make p->numa_shared flip-flop less around unstable equilibria; instead,
require a significant move in either direction before switching between
'dominantly shared accesses' and 'dominantly private accesses' NUMA status.

Suggested-by: Rik van Riel <riel@xxxxxxxxxx>
Cc: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Cc: Peter Zijlstra <a.p.zijlstra@xxxxxxxxx>
Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>
Cc: Rik van Riel <riel@xxxxxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxx>
Cc: Hugh Dickins <hughd@xxxxxxxxxx>
Signed-off-by: Ingo Molnar <mingo@xxxxxxxxxx>
---
 kernel/sched/fair.c | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8aa4b36..ab4a7130 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1111,7 +1111,20 @@ static void task_numa_placement(struct task_struct *p)
 	 * we might want to consider a different equation below to reduce
 	 * the impact of a little private memory accesses.
 	 */
-	shared = (total[0] >= total[1] / 2);
+	shared = p->numa_shared;
+
+	if (shared < 0) {
+		shared = (total[0] >= total[1]);
+	} else if (shared == 0) {
+		/* If it was private before, make it harder to become shared: */
+		if (total[0] >= total[1]*2)
+			shared = 1;
+	} else if (shared == 1) {
+		/* If it was shared before, make it harder to become private: */
+		if (total[0]*2 <= total[1])
+			shared = 0;
+	}
+
 	if (shared)
 		p->ideal_cpu = sched_update_ideal_cpu_shared(p);
 	else
-- 
1.7.11.7
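
For reference, here is a minimal userspace sketch of the 2:1 hysteresis rule
the hunk above implements. It assumes total[0] counts faults on shared pages
and total[1] faults on private pages, as the surrounding code suggests; the
names numa_shared_update(), shared_faults and private_faults are illustrative
only, not kernel APIs.

#include <stdio.h>

/*
 * Sketch of the hysteresis applied to p->numa_shared:
 *   state < 0: no history yet, decide by simple majority
 *   state == 0 ("private"): only flip to shared on a 2:1 shared majority
 *   state == 1 ("shared"):  only flip to private on a 2:1 private majority
 */
static int numa_shared_update(int state, unsigned long shared_faults,
			      unsigned long private_faults)
{
	if (state < 0)
		return shared_faults >= private_faults;
	if (state == 0 && shared_faults >= private_faults * 2)
		return 1;
	if (state == 1 && shared_faults * 2 <= private_faults)
		return 0;
	return state;	/* inside the hysteresis band: keep the old status */
}

int main(void)
{
	/* 60/40 split: not a 2:1 move, so neither stable state is left */
	printf("%d\n", numa_shared_update(0, 60, 40));	/* stays 0 (private) */
	printf("%d\n", numa_shared_update(1, 60, 40));	/* stays 1 (shared)  */
	/* 80/40 split: a 2:1 move, enough to flip private -> shared */
	printf("%d\n", numa_shared_update(0, 80, 40));	/* becomes 1 */
	return 0;
}

The asymmetric thresholds mean that small fluctuations around a 1:1 fault
ratio no longer toggle the task's shared/private status on every placement
pass.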