On 11/12/2012 11:04 AM, Peter Zijlstra wrote:
> We change the load-balancer to prefer moving tasks in order of:
>
>  1) !numa tasks and numa tasks in the direction of more faults
>  2) allow !ideal tasks getting worse in the direction of faults
>  3) allow private tasks to get worse
>  4) allow shared tasks to get worse
>
> This order ensures we prefer increasing memory locality, but when we
> do have to make hard decisions we prefer spreading private over shared,
> because spreading shared tasks significantly increases the interconnect
> bandwidth since not all memory can follow.
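For my own understanding, the preference order above reads roughly like the following ranking sketch. This is only an illustration, not the actual scheduler code; `struct task` and its boolean fields are hypothetical stand-ins for whatever task_struct state the patches actually consult. Lower rank means the load balancer tries that candidate first:

```c
#include <stdbool.h>

/* Hypothetical per-task state -- stand-ins, not real task_struct fields. */
struct task {
    bool numa;           /* task participates in NUMA placement        */
    bool toward_faults;  /* proposed move is toward more of its faults */
    bool ideal_node;     /* task currently runs on its ideal node      */
    bool private_mem;    /* its faults are mostly private, not shared  */
};

/* Rank a migration candidate; lower rank = preferred, mirroring 1)..4). */
static int migrate_rank(const struct task *t)
{
    if (!t->numa || t->toward_faults)
        return 0;   /* 1) !numa tasks, or moves toward more faults */
    if (!t->ideal_node)
        return 1;   /* 2) !ideal tasks may get worse first         */
    if (t->private_mem)
        return 2;   /* 3) then private tasks may get worse         */
    return 3;       /* 4) shared tasks get worse only as last resort */
}
```

The point of the ordering, as I read it, is that rank 3 moves are the ones that hurt interconnect bandwidth, so they come last.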
Combined with the fact that we only turn a certain amount of memory into
NUMA ptes each second, could this result in a program being classified
as a private task one second, and as a shared task a few seconds later?

What does the code do to prevent such oscillation of the task
classification? Oscillation would have consequences for the way the
task's NUMA placement is handled, and might result in the task moving
from node to node needlessly.

-- 
All rights reversed
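P.S. To make the concern concrete: one generic way to damp this kind of flip-flopping is hysteresis, i.e. requiring the shared-fault fraction to cross a high threshold before flipping to "shared" and a low one before flipping back. This is only a sketch of that generic technique under made-up thresholds, not a claim about what the patches do:

```c
#include <stdbool.h>

/* Hypothetical thresholds, percent of sampled faults that are shared. */
#define SHARED_HIGH 70
#define SHARED_LOW  30

/*
 * Hysteresis sketch: a task becomes "shared" only when clearly above
 * SHARED_HIGH, and reverts to "private" only when clearly below
 * SHARED_LOW, so one noisy sampling period cannot flip it.
 */
static bool update_classification(bool currently_shared, unsigned pct_shared)
{
    if (currently_shared)
        return pct_shared >= SHARED_LOW;  /* stay shared unless clearly private */
    return pct_shared > SHARED_HIGH;      /* become shared only if clearly so   */
}
```

With only a fraction of memory converted to NUMA ptes per second, a plain 50% cutoff would be exactly the kind of classifier that oscillates; something with memory of the previous state seems needed.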