This patch set attempts to implement a pseudo-interleaving policy for workloads that do not fit in one NUMA node.

For each NUMA group, we track the NUMA nodes on which the workload is actively running, and try to concentrate the memory on those NUMA nodes.

Unfortunately, the scheduler appears to move tasks around quite a bit, leading to nodes being dropped from the "active nodes" mask and re-added a little later, causing excessive memory migration. I am not sure how to solve that. Hopefully somebody will have an idea :)
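
To make the idea concrete, below is a minimal userspace sketch of the "active nodes" heuristic described above. It is not the patch itself: the structure, the 3/16 activity threshold, and the helper names (update_active_nodes, should_migrate_page) are made up for illustration and do not correspond to the actual scheduler code.

/*
 * Userspace sketch only: mark a node "active" when its per-node CPU
 * fault count is at least a fraction of the busiest node's count, and
 * only pull memory toward nodes in that mask.
 */
#include <stdbool.h>
#include <stdio.h>

#define MAX_NUMA_NODES 8

struct numa_group_sketch {
	/* CPU-side NUMA fault counts observed on each node */
	unsigned long faults_cpu[MAX_NUMA_NODES];
	/* bitmask of nodes the workload is considered active on */
	unsigned long active_nodes;
};

/*
 * Recompute the active node mask. The 3/16 fraction of the busiest
 * node's fault count is an assumed threshold for this example.
 */
static void update_active_nodes(struct numa_group_sketch *ng)
{
	unsigned long max = 0;
	int nid;

	for (nid = 0; nid < MAX_NUMA_NODES; nid++)
		if (ng->faults_cpu[nid] > max)
			max = ng->faults_cpu[nid];

	ng->active_nodes = 0;
	for (nid = 0; nid < MAX_NUMA_NODES; nid++)
		if (max && ng->faults_cpu[nid] * 16 >= max * 3)
			ng->active_nodes |= 1UL << nid;
}

/*
 * Pseudo-interleaving decision: a page sitting on an inactive node is
 * pulled toward an active node; a page already on an active node stays
 * put instead of being concentrated on a single node.
 */
static bool should_migrate_page(struct numa_group_sketch *ng,
				int page_nid, int faulting_nid)
{
	bool page_on_active    = ng->active_nodes & (1UL << page_nid);
	bool fault_from_active = ng->active_nodes & (1UL << faulting_nid);

	return !page_on_active && fault_from_active;
}

int main(void)
{
	struct numa_group_sketch ng = {
		.faults_cpu = { 800, 750, 40, 0, 0, 0, 0, 0 },
	};

	update_active_nodes(&ng);
	printf("active_nodes mask: 0x%lx\n", ng.active_nodes);
	printf("migrate page on node 2 faulted from node 0? %d\n",
	       should_migrate_page(&ng, 2, 0));
	printf("migrate page on node 1 faulted from node 0? %d\n",
	       should_migrate_page(&ng, 1, 0));
	return 0;
}

The thrashing problem mentioned above shows up here as well: if a node's fault count hovers around the threshold, it flips in and out of active_nodes, and should_migrate_page keeps changing its answer for pages on that node.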