On 5/9/2012 6:46 AM, Avi Kivity wrote:
On 05/09/2012 04:05 PM, Chegu Vinod wrote:
Hello,
On an 8-socket Westmere host I am attempting to run a single guest and
characterize the virtualization overhead for a system-intensive
workload (AIM7-high_systime) as the size of the guest scales (10way/64G,
20way/128G, ... 80way/512G).
To do some comparisons between the native vs. guest runs, I have
been using "numactl" to control the CPU node and memory node bindings for
the qemu instance. For larger guest sizes I end up binding across multiple
localities, e.g. for a 40-way guest:
numactl --cpunodebind=0,1,2,3 --membind=0,1,2,3 \
qemu-system-x86_64 -smp 40 -m 262144 \
<....>
I understand that the actual mappings from a guest virtual address to a host
physical address could change.
Is there a way to determine [at a given instant] which of the host's NUMA nodes is
providing the backing physical memory for the active guest's kernel and
also for the apps actively running in the guest?
I am guessing that there is a better way (some tool available?) than just
diff'ing the per-node memory usage from the before and after output of
"numactl --hardware" on the host.
Not sure if that's what you want, but there's Documentation/vm/pagemap.txt.
Thanks for the pointer, Avi! Will give it a try...
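A minimal, untested sketch of the pagemap approach (my own rough reading of
pagemap.txt, with a move_pages(2) call added on top to look up the NUMA node;
the pid and the guest-RAM virtual address are assumed to come from
/proc/<pid>/maps of the qemu process):

/* Untested sketch: translate a virtual address in a target process
 * (e.g. the qemu-system-x86_64 pid) to a host PFN via /proc/<pid>/pagemap,
 * then ask the kernel which NUMA node currently backs that page by
 * calling move_pages(2) with a NULL target-node list.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>
#include <numaif.h>          /* move_pages(); link with -lnuma */

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <pid> <hex-virtual-address>\n", argv[0]);
        return 1;
    }
    pid_t pid = (pid_t)atoi(argv[1]);
    unsigned long vaddr = strtoul(argv[2], NULL, 16);
    long psize = sysconf(_SC_PAGESIZE);

    /* /proc/<pid>/pagemap has one 64-bit entry per virtual page:
     * bit 63 = page present, bits 0-54 = PFN (see pagemap.txt). */
    char path[64];
    snprintf(path, sizeof(path), "/proc/%d/pagemap", pid);
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open pagemap"); return 1; }

    uint64_t entry;
    off_t off = (off_t)(vaddr / psize) * sizeof(entry);
    if (pread(fd, &entry, sizeof(entry), off) != sizeof(entry)) {
        perror("pread pagemap");
        return 1;
    }
    close(fd);

    if (!(entry & (1ULL << 63))) {
        printf("page not present\n");
        return 0;
    }
    uint64_t pfn = entry & ((1ULL << 55) - 1);

    /* With nodes == NULL, move_pages() moves nothing; it just reports
     * the node each page currently lives on in status[]. */
    void *page = (void *)(vaddr & ~((unsigned long)psize - 1));
    int status = -1;
    if (move_pages(pid, 1, &page, NULL, &status, 0) != 0) {
        perror("move_pages");
        return 1;
    }

    printf("vaddr 0x%lx -> host PFN 0x%llx, NUMA node %d\n",
           vaddr, (unsigned long long)pfn, status);
    return 0;
}

Note that reading another process's pagemap and querying it with move_pages()
both need appropriate privileges (root, or ownership of the target process).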
FYI... I tried using a recent version of the "crash" utility
(http://people.redhat.com/anderson/) with the upstream kvm.git kernel
(3.4.0-rc4+) and it seems to provide VA -> PA mappings for a given app
on a live system.
It also looks like there is an extension to this crash utility, called
qemu-vtop, which is supposed to give the GPA->HVA->HPA mappings. Need to
give this a try and see if it works.
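A coarser option would be to just sum the N<node>=<pages> fields that
/proc/<pid>/numa_maps already reports for each of qemu's mappings. A rough,
untested sketch (MAX_NODES is an arbitrary assumption, and hugetlbfs mappings
report counts in their own page size, so the totals are only approximate):

/* Untested sketch: sum the N<node>=<pages> counts that
 * /proc/<pid>/numa_maps reports for every mapping of a process,
 * to get a per-node total of the pages currently backing it.
 */
#include <stdio.h>
#include <string.h>

#define MAX_NODES 64   /* arbitrary; plenty for an 8-socket box */

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }

    char path[64];
    snprintf(path, sizeof(path), "/proc/%s/numa_maps", argv[1]);
    FILE *f = fopen(path, "r");
    if (!f) { perror("fopen numa_maps"); return 1; }

    unsigned long long pages[MAX_NODES] = { 0 };
    char line[4096];

    while (fgets(line, sizeof(line), f)) {
        /* Each line is "<vaddr> <policy> field=value ...";
         * the interesting fields look like "N0=12345". */
        for (char *tok = strtok(line, " \n"); tok; tok = strtok(NULL, " \n")) {
            int node;
            unsigned long long count;
            if (sscanf(tok, "N%d=%llu", &node, &count) == 2 &&
                node >= 0 && node < MAX_NODES)
                pages[node] += count;
        }
    }
    fclose(f);

    for (int node = 0; node < MAX_NODES; node++)
        if (pages[node])
            printf("node %d: %llu pages\n", node, pages[node]);
    return 0;
}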
Thx!
Vinod