Is your question about KVM, libvirt, or some Red Hat product? You're posting to the kvm list, but it sounds like a libvirt question.

Because in KVM a virtual machine is a regular Linux process, you can apply NUMA policy to it the same way you would to any other process: configure NUMA on your host, then launch the guest under numactl, e.g.

    numactl -m 0 --physcpubind=0,8 qemu-kvm .........

That doesn't mean you're creating some NUMA structure inside the VM; it just means the VM's large amount of memory is backed by a single NUMA node, so you get improved memory performance.

---

----- "Steve Brown" <stevebrown@xxxxxxxxxxxxxxxx> wrote:

> From: "Steve Brown" <stevebrown@xxxxxxxxxxxxxxxx>
> To: kvm@xxxxxxxxxxxxxxx
> Sent: Saturday, November 21, 2009 11:20:22 AM GMT -05:00 US/Canada Eastern
> Subject: Can't get guests to recognize NUMA architecture as alluded to in Red Hat marketing material
>
> So, based on the following lines from the Red Hat PDF on KVM:
>
> ....support for large memory systems with NUMA and integrated memory
> controllers....
>
> ....NUMA support allows virtual machines to efficiently access large
> amounts of memory....
>
> I decided to try out KVM as an alternative to the Xen setup we have been
> using, where guests are pinned to nodes and limited (by choice) to only
> the RAM available at that node. This is a two-socket, eight-core, 72GB
> system.
>
> So I installed CentOS 5.4 and proceeded to use virt-install to create a
> guest, simply a CentOS 5.4 guest. I allocated it 40GB or so of RAM to be
> sure the memory allocation would cross node boundaries. I tried
> "vcpus=8", "cpuset=auto", "cpuset=1,2 vcpus=8" (that one caused all
> sorts of problems and CPU lockups), "cpuset=1,2 vcpus=2", and "cpuset=1,2".
>
> No matter what, I still see only one NUMA node in the guest from numastat.
>
> So what's going on here? Is the PDF misleading? Does a guest not need to
> know about NUMA, with all scheduling/NUMAness handled by KVM?
> Am I missing some magical configuration line in the XML so the guest
> understands its NUMAness? When allocating memory to the guest, does the
> virsh wrapper make all the right backend calls to allocate exactly 50%
> of the requested memory from each physical socket's half of total system
> memory, in this case 20GB from one socket and 20GB from the other?
>
> Any useful comments appreciated.
>
> Thanks!
>
> --
> To unsubscribe from this list: send the line "unsubscribe kvm" in
> the body of a message to majordomo@xxxxxxxxxxxxxxx
> More majordomo info at http://vger.kernel.org/majordomo-info.html
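[Editor's note: for readers wondering about the "magical configuration line in the XML" asked about above — the CPU-pinning half of this maps to the cpuset attribute in libvirt's domain XML. A minimal sketch, assuming a guest sized as in the original post; the domain name is hypothetical, and in libvirt of this era memory placement still had to be handled on the host (e.g. via numactl), not in the XML:]

```xml
<!-- Sketch: pin a guest's vcpus to host CPUs 0 and 8, roughly the
     equivalent of launching under numactl dash-dash-physcpubind=0,8.
     Domain name is hypothetical; memory size (40GB) is from the post. -->
<domain type='kvm'>
  <name>centos54-guest</name>
  <memory>41943040</memory>        <!-- libvirt takes KiB: 40 * 1024 * 1024 -->
  <vcpu cpuset='0,8'>2</vcpu>      <!-- restrict vcpus to host CPUs 0 and 8 -->
  ...
</domain>
```

[Note that this pins CPUs only; it does not expose a NUMA topology to the guest, which matches the reply's point that the guest still sees a single node.]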