Thank you for your quick reply! I understand the NUMA cell concept, and I am using CPU pinning in the XML file. For example:

<domain type='kvm'>
  <name>Debian-xxxx</name>
  <uuid>xxxx</uuid>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>
  <vcpu placement='static' cpuset='6-9,18-21'>8</vcpu>
  <os>
    <type arch='x86_64' machine='rhel6.3.0'>hvm</type>
    ...
  </os>
  ...

The guest still hangs while booting its Linux kernel (3.2.x.x) ... :(

Here is the virsh capabilities output from the host (CentOS 6.3):

# virsh capabilities
<capabilities>

  <host>
    <uuid>00020003-0004-0005-0006-000700080009</uuid>
    <cpu>
      <arch>x86_64</arch>
      <model>SandyBridge</model>
      <vendor>Intel</vendor>
      <topology sockets='1' cores='6' threads='2'/>
      <feature name='pdpe1gb'/>
      <feature name='osxsave'/>
      <feature name='tsc-deadline'/>
      <feature name='dca'/>
      <feature name='pdcm'/>
      <feature name='xtpr'/>
      <feature name='tm2'/>
      <feature name='est'/>
      <feature name='smx'/>
      <feature name='vmx'/>
      <feature name='ds_cpl'/>
      <feature name='monitor'/>
      <feature name='dtes64'/>
      <feature name='pbe'/>
      <feature name='tm'/>
      <feature name='ht'/>
      <feature name='ss'/>
      <feature name='acpi'/>
      <feature name='ds'/>
      <feature name='vme'/>
    </cpu>
    <power_management>
      <suspend_disk/>
    </power_management>
    <migration_features>
      <live/>
      <uri_transports>
        <uri_transport>tcp</uri_transport>
      </uri_transports>
    </migration_features>
    <topology>
      <cells num='2'>
        <cell id='0'>
          <cpus num='12'>
            <cpu id='0'/>
            <cpu id='1'/>
            <cpu id='2'/>
            <cpu id='3'/>
            <cpu id='4'/>
            <cpu id='5'/>
            <cpu id='12'/>
            <cpu id='13'/>
            <cpu id='14'/>
            <cpu id='15'/>
            <cpu id='16'/>
            <cpu id='17'/>
          </cpus>
        </cell>
        <cell id='1'>
          <cpus num='12'>
            <cpu id='6'/>
            <cpu id='7'/>
            <cpu id='8'/>
            <cpu id='9'/>
            <cpu id='10'/>
            <cpu id='11'/>
            <cpu id='18'/>
            <cpu id='19'/>
            <cpu id='20'/>
            <cpu id='21'/>
            <cpu id='22'/>
            <cpu id='23'/>
          </cpus>
        </cell>
      </cells>
    </topology>
  </host>

  <guest>
    <os_type>hvm</os_type>
    <arch name='i686'>
      <wordsize>32</wordsize>
      <emulator>/usr/libexec/qemu-kvm</emulator>
      <machine>rhel6.3.0</machine>
      <machine canonical='rhel6.3.0'>pc</machine>
      <machine>rhel6.2.0</machine>
      <machine>rhel6.1.0</machine>
      <machine>rhel6.0.0</machine>
      <machine>rhel5.5.0</machine>
      <machine>rhel5.4.4</machine>
      <machine>rhel5.4.0</machine>
      <domain type='qemu'>
      </domain>
      <domain type='kvm'>
        <emulator>/usr/libexec/qemu-kvm</emulator>
      </domain>
    </arch>
    <features>
      <cpuselection/>
      <deviceboot/>
      <pae/>
      <nonpae/>
      <acpi default='on' toggle='yes'/>
      <apic default='on' toggle='no'/>
    </features>
  </guest>

  <guest>
    <os_type>hvm</os_type>
    <arch name='x86_64'>
      <wordsize>64</wordsize>
      <emulator>/usr/libexec/qemu-kvm</emulator>
      <machine>rhel6.3.0</machine>
      <machine canonical='rhel6.3.0'>pc</machine>
      <machine>rhel6.2.0</machine>
      <machine>rhel6.1.0</machine>
      <machine>rhel6.0.0</machine>
      <machine>rhel5.5.0</machine>
      <machine>rhel5.4.4</machine>
      <machine>rhel5.4.0</machine>
      <domain type='qemu'>
      </domain>
      <domain type='kvm'>
        <emulator>/usr/libexec/qemu-kvm</emulator>
      </domain>
    </arch>
    <features>
      <cpuselection/>
      <deviceboot/>
      <acpi default='on' toggle='yes'/>
      <apic default='on' toggle='no'/>
    </features>
  </guest>

</capabilities>

And the odd thing is this: virsh freecell only prints a total, not a per-node list:

# virsh freecell
Total: 15891284 kB

According to this Fedora page
http://docs.fedoraproject.org/en-US/Fedora/13/html/Virtualization_Guide/ch25s06.html
I should see a per-node list.

Anyway, my Debian guest still does not boot when I assign more than 4 vCPUs to it, even if I pin all of them to the same NUMA node.
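Two things I plan to try next, if I am reading the virsh man page and the libvirt domain XML documentation correctly (please correct me if these sketches are wrong). First, virsh freecell seems to accept a cell number, so per-node free memory should be queryable one cell at a time:

# virsh freecell 0
# virsh freecell 1

Second, as far as I can tell, cpuset only pins the vCPUs, not the guest's memory, so I could add a <numatune> element to keep the memory on the same cell the vCPUs are pinned to (cell 1 in my case, since CPUs 6-9 and 18-21 belong to cell id='1'):

<numatune>
  <memory mode='strict' nodeset='1'/>
</numatune>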
BTW, I have copied my host CPU's configuration and CPU features to my guests (using the virt-manager GUI, running remotely on an Ubuntu desktop box). Maybe I should use a predefined CPU model instead of cloning the CPU configuration from the host?
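If I go that route, I think something like this in the guest XML is what the libvirt docs suggest (just a sketch; 'SandyBridge' is the model name reported in the host capabilities above):

<cpu match='exact'>
  <model>SandyBridge</model>
</cpu>

That would give the guest a well-defined CPU model instead of the long hand-copied feature list.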
Zoltan