On 05/09/2012 04:05 PM, Chegu Vinod wrote:
> Hello,
>
> On an 8-socket Westmere host I am attempting to run a single guest and
> characterize the virtualization overhead for a system-intensive
> workload (AIM7-high_systime) as the size of the guest scales (10way/64G,
> 20way/128G, ... 80way/512G).
>
> To do some comparisons between the native and guest runs, I have
> been using "numactl" to control the cpu node & memory node bindings for
> the qemu instance. For larger guest sizes I end up binding across multiple
> localities, e.g. for a 40-way guest:
>
> numactl --cpunodebind=0,1,2,3 --membind=0,1,2,3 \
>     qemu-system-x86_64 -smp 40 -m 262144 \
>     <....>
>
> I understand that the actual mappings from a guest virtual address to a
> host physical address could change.
>
> Is there a way to determine [at a given instant] which host NUMA node is
> providing the backing physical memory for the active guest's kernel and
> also for the apps actively running in the guest?
>
> Guessing that there is a better way (some tool available?) than just
> diff'ing the per-node memory usage from the before and after output of
> "numactl --hardware" on the host.

Not sure if that's what you want, but there's Documentation/vm/pagemap.txt.

-- 
error compiling committee.c: too many arguments to function
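For reference, a minimal, untested sketch of the pagemap approach pointed to above. It assumes a 64-bit host with 4K pages and enough privilege to read another process's /proc/<pid>/pagemap; the pid and virtual address to inspect (e.g. the start of one of qemu's guest-RAM mappings, taken from /proc/<pid>/maps) are up to you. It only prints the backing page frame number; matching that PFN to a NUMA node still requires the host's per-node physical ranges (e.g. the "node N: [mem ...]" lines from the boot log). For a coarser per-mapping view, /proc/<pid>/numa_maps already reports how many pages of each VMA sit on each node.

/* pagemap-pfn.c: print the host PFN backing one virtual page of a process.
 *
 * Entry format per Documentation/vm/pagemap.txt:
 *   bits 0-54  page frame number (PFN) if present
 *   bit  63    page present
 */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>

#define PAGE_SHIFT 12                   /* assumes 4K pages */
#define PFN_MASK   ((1ULL << 55) - 1)   /* bits 0-54 */
#define PM_PRESENT (1ULL << 63)         /* bit 63 */

int main(int argc, char **argv)
{
	char path[64];
	uint64_t vaddr, entry, pfn;
	int fd;

	if (argc != 3) {
		fprintf(stderr, "usage: %s <pid> <hex-vaddr>\n", argv[0]);
		return 1;
	}

	vaddr = strtoull(argv[2], NULL, 16);
	snprintf(path, sizeof(path), "/proc/%s/pagemap", argv[1]);

	fd = open(path, O_RDONLY);
	if (fd < 0) {
		perror("open pagemap");
		return 1;
	}

	/* one 64-bit entry per virtual page of the target process */
	if (pread(fd, &entry, sizeof(entry),
		  (vaddr >> PAGE_SHIFT) * sizeof(entry)) != sizeof(entry)) {
		perror("pread");
		return 1;
	}
	close(fd);

	if (!(entry & PM_PRESENT)) {
		printf("0x%llx: page not present\n", (unsigned long long)vaddr);
		return 0;
	}

	pfn = entry & PFN_MASK;
	printf("0x%llx -> PFN 0x%llx (phys addr 0x%llx)\n",
	       (unsigned long long)vaddr,
	       (unsigned long long)pfn,
	       (unsigned long long)(pfn << PAGE_SHIFT));
	return 0;
}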