On 06/30/2012 01:50 AM, David Rientjes wrote:
On Fri, 29 Jun 2012, Andrew Morton wrote:
I've tested this patch on numa machines with 2, 4 and 8 nodes and
measured speed of memory access inside of KVM guests with memory pinned
to one of nodes with this benchmark:
http://pholasek.fedorapeople.org/alloc_pg.c
Population standard deviations of access times, as a percentage of the
average, were as follows:
merge_nodes=1
2 nodes 1.4%
4 nodes 1.6%
8 nodes 1.7%
merge_nodes=0
2 nodes 1%
4 nodes 0.32%
8 nodes 0.018%
ooh, numbers! Thanks.
Ok, the standard deviation increases when merging pages from nodes at a
remote distance; that makes sense. But if that's true, then you would
either restrict the entire application to local memory with mempolicies
or cpusets, or you would already use mbind() to restrict this memory to
that set of nodes, so that accesses, even with ksm merging, would have
affinity.
You are right for the case where you write your own custom application,
but I think the KVM guest case is a little more problematic, since the
guest's memory may have to be split across several nodes.